Feature Story | 22-Apr-2026

Who bears the blame when AI gets it wrong? UVA expert warns of a future where humans become "perpetual scapegoats"

As AI expands into high-stakes fields, philosopher and data scientist David Danks says current accountability frameworks are failing — and proposes new legal structures to address the gap

University of Virginia School of Data Science

Artificial intelligence is informing decisions in emergency rooms, navigating roads without human drivers, and enabling weapons to attack targets without human input — yet the legal and ethical frameworks for assigning responsibility when these systems fail remain dangerously underdeveloped. That's the warning from David Danks, distinguished university professor of philosophy, AI, and data science at the University of Virginia.

Danks, who holds dual appointments in UVA's Department of Philosophy and School of Data Science, argues that society is drifting toward a default solution that is both unjust and unsustainable: making humans the perpetual scapegoats for AI errors they had no meaningful ability to prevent.

"We're starting to see some signs that this might be the future we're moving to, where a radiologist, for example, has to sign off on every AI diagnosis, but they aren't given the time to actually second-guess the AI diagnosis," he said.

Danks says the more equitable path runs through existing legal frameworks, particularly product liability law, which could hold AI developers and deploying organizations responsible when their systems cause harm. He also raises the possibility of a more novel solution: extending a form of legal personhood to AI systems themselves, allowing them to carry insurance and hold assets that could be seized as compensation when errors occur.

"Much as we allow corporations in the United States to be legal persons, perhaps algorithms and systems need to start having some form of legal personhood," he said.

Danks expects the accountability crisis to come to a head first in consumer robotics and autonomous vehicles, sectors where AI systems act independently of any human operator and liability cannot simply be shifted onto a user.

"Imagine you purchase a self-driving car. Who bears the liability when it gets into an accident when you're not in the car?" he said. "This is all going to have to get worked out in the next five to ten years."

Amid a rapidly changing technological landscape, companies are rushing to release the latest AI tools, often at the expense of responsible deployment, Danks says. But he is skeptical of the urgency narrative itself.

"A lot of the race narrative is actually self-generated by the companies, rather than reflecting actual economic pressures," he said, adding that responsible development practices are ultimately cheaper than fixing problems after deployment.

Danks, one of eight UVA faculty members recognized in 2026 for research into ethical AI, is currently conducting research on AI agency and managerial moral responsibility through a fellowship awarded by the LaCross AI Institute at UVA's Darden School of Business.

About the University of Virginia School of Data Science

The UVA School of Data Science advances the responsible use of data for the benefit of society through education, research, and public engagement. Learn more at datascience.virginia.edu.
