News Release

Pusan National University study reveals a shared responsibility of both humans and AI in AI-caused harm

Researchers highlight the need for a distributed model of AI responsibility, in which duties are shared among humans and AI systems

Peer-Reviewed Publication

Pusan National University

Rethinking moral agency and responsibility in the era of artificial intelligence (AI)

image: While their lack of consciousness and free will makes it difficult to blame AI systems for the harms they cause, it is also not possible to blame human developers, who cannot predict these mistakes. Researchers have demonstrated that a distributed model of AI responsibility, in which duties are shared among humans and AI systems, can reinforce ethical practices in both the design and use of AI systems.

Credit: Dr. Hyungrae Noh from Pusan National University, Republic of Korea

Artificial intelligence (AI) is becoming an integral part of our everyday lives, and with it emerges a pressing question: who should be held responsible when AI goes wrong? AI lacks consciousness and free will, which makes it difficult to blame the system itself for its mistakes. Yet AI systems operate semi-autonomously through complex, opaque processes, so even though they are developed and used by human stakeholders, those stakeholders cannot predict the harm the systems may cause. Traditional ethical frameworks thus fail to explain who is responsible for these harms, leading to the so-called responsibility gap in AI ethics.

A recent study by Dr. Hyungrae Noh, an Assistant Professor of Philosophy at Pusan National University, Republic of Korea, examines the philosophical and empirical issues surrounding moral responsibility in the context of AI systems. The study critiques traditional moral frameworks centered on human psychological capacities, such as intention and free will, which make it practically impossible to ascribe responsibility to either AI systems or human stakeholders. The findings were published in the journal Topoi on November 6, 2025.

“With AI technologies becoming deeply integrated into our lives, the instances of AI-mediated harm are bound to increase. So, it is crucial to understand who is morally responsible for the unforeseeable harms caused by AI,” says Dr. Noh.

AI systems cannot be blamed for harm under traditional ethical frameworks, which typically require an agent to possess certain mental capacities in order to be held morally responsible. AI systems lack conscious understanding, that is, the capacity to grasp the moral significance of their actions. They have no subjective experiences, and so lack phenomenal consciousness. They do not exercise full control over their behavior and decisions, and they lack intention, the capacity for deliberate decision-making that underlies action. Lastly, these systems often cannot answer for or explain their actions. Given these gaps, it would be wrong to hold the systems themselves responsible.

The study also sheds light on Luciano Floridi’s non-anthropocentric theory of agency and responsibility in the domain of AI, which other researchers in the field have endorsed. This theory replaces traditional ethical frameworks with the idea of censorship, according to which human stakeholders have a duty to prevent AI from causing harm by monitoring and modifying the systems, and by disconnecting or deleting them when necessary. The same duty extends to AI systems themselves if they possess a sufficient level of autonomy.

“Instead of insisting on traditional ethical frameworks in contexts of AI, it is important to acknowledge the idea of distributed responsibility. This implies a shared duty of both human stakeholders, including programmers, users, and developers, and AI agents themselves to address AI-mediated harms, even when the harm was not anticipated or intended. This will help rectify errors promptly and prevent their recurrence, thereby reinforcing ethical practices in both the design and use of AI systems,” concludes Dr. Noh.

 

***

 

Reference
Journal: Topoi
DOI: 10.1007/s11245-025-10302-4

 

About Pusan National University
Pusan National University, located in Busan, South Korea, was founded in 1946 and is now the No. 1 national university in South Korea in research and educational competency. The multi-campus university also operates smaller campuses in Yangsan, Miryang, and Ami. The university prides itself on the principles of truth, freedom, and service, and has approximately 30,000 students, 1,200 professors, and 750 faculty members. It comprises 14 colleges (schools) and one independent division, with 103 departments in all.

Website: https://www.pusan.ac.kr/eng/Main.do

 

About the author
Dr. Hyungrae Noh is an Assistant Professor of Philosophy at Pusan National University. Prior to joining Pusan National University, he was an Associate Professor of Philosophy at Sunchon National University and earned his PhD from the University of Iowa in 2019. His research focuses on scientific explanations of the mind, investigating the minimal conditions for consciousness using neuroscientific and evolutionary approaches. He also explores the moral implications of AI from everyday (folk psychological) perspectives.

 

Lab: https://sites.google.com/view/hyungraenoh

ORCID iD: 0000-0001-9503-6222


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.