Fake news, and the media's coverage of it, has recently been at the forefront of many Americans' minds. What it means for a source of information to be trustworthy, and why, is an increasingly important question.
Ava Thomas Wright, a doctoral candidate in philosophy, uses value theory and epistemology to study how lies undermine trust in expert sources.
For her dissertation research, Wright is examining whether experts have a legal “duty of veracity” when reporting their knowledge.
“The intuition that drives my dissertation is that in a civil society, authoritative sources such as experts ought to be trustworthy, or at least not lie to us, when reporting knowledge that we ourselves cannot learn, or would have great difficulty learning, on our own.”
Examples, according to Wright, include lies about whether smoking causes cancer or whether global warming can be attributed to human activity.
“My dissertation at its broadest is about how trust in social sources of expert information is one of the foundations of a civil society and what that implies legally, within the limits imposed by the freedom of expression.
“When those clad in authority lie to us about verifiable expert knowledge such as scientific knowledge, they strike at something fundamental to a just society, because such lies in principle rob people of their ability to use expert social knowledge in their practical reasoning and decision-making.”
Wright stresses that lies “that in principle would make expert social knowledge impossible” seem not only unethical but unjust.
“My dissertation research has become more important as public trust in traditional media sources and other institutional sources of knowledge has eroded in the wake of various social media disinformation campaigns and ‘fake news’ stories. Universal consent to law is impossible to secure in a society steeped in lies and propaganda.”
Wright is also completing a master’s degree in artificial intelligence (AI). For her master’s thesis, she is exploring how best to regulate “semi-autonomous” AI agents.
These are agents that are capable of learning in an open-ended way and whose behavior is therefore to some extent unforeseeable by design.
“I’m interested in both the ethical and legal principles appropriate for regulating such agents, as well as the technical problem of how to go about implementing those principles in an actual agent.”
Wright credits UGA with giving her the opportunity to acquire both the normative and the technical knowledge needed to address such a question, through her simultaneous master’s and doctoral degree programs.
“While I already tend to work at the border between value theory and epistemology within philosophy, interdisciplinary work between philosophy and computer science is needed to make significant headway on problems in the general area of ‘ethics of AI.’”
After graduation, Wright plans to seek an academic job in philosophy.
“I would like to continue my current research, which broadly explores the normative, and especially political, implications of the social character of our knowledge. I would also like to continue working and teaching in areas such as artificial intelligence, logic, or cognitive science, if possible, as I tend to thrive on a diversity of topics and approaches to problems.”