Elizabeth Phillips, Assistant Professor, Psychology, Human Factors/Applied Cognition, is conducting a study examining two aspects of how people and robots respond to norm violations in human-robot teams.
Specifically, she is investigating: (1) context-sensitive tradeoffs between rule-based and role-based responses, and (2) representations and mechanisms needed to facilitate role-based responses.
She is doing so by taking four steps.
First, she is identifying metrics to assess responses.
Second, she is investigating the tradeoffs between role- and rule-based responses.
Third, she is modeling how role-based responses are generated and how the choice between role-based and rule-based responses is made.
Finally, she is using the validation of these models to articulate new arguments in moral philosophy.
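To make the distinction concrete, here is a minimal, purely hypothetical sketch of how a robot might choose between the two response types. The function name, the relationship-strength heuristic, and the response wordings are all illustrative assumptions, not the project's actual model.

```python
# Hypothetical sketch: choosing between a rule-based and a role-based
# response to a norm violation. The heuristic here (stronger relationships
# license role-based appeals) is an assumption for illustration only.

def respond_to_violation(violation: str, violator_role: str,
                         robot_role: str, relationship_strength: float) -> str:
    """Pick a response framing based on a simple context heuristic."""
    # Rule-based framing cites the violated norm directly.
    rule_based = f"That action violates the norm: {violation}."
    # Role-based framing appeals to the violator's role obligations.
    role_based = (f"As a {violator_role}, you are expected not to do that; "
                  f"as your {robot_role}, I have to point it out.")
    # Toy tradeoff: use the role-based appeal only in closer relationships.
    return role_based if relationship_strength > 0.5 else rule_based

print(respond_to_violation("no unsafe lifting", "teammate", "assistant", 0.8))
```

A real model would weigh far richer context than a single scalar, which is precisely the kind of tradeoff the study's second step investigates.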
This study is important because there is a clear societal benefit in understanding the ethical implications of natural language generation (NLG) algorithms, the software processes that turn structured data into natural language.
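The core idea behind NLG can be sketched in a few lines. This template-based example is illustrative only (the data fields and wording are invented); production NLG systems are far more sophisticated.

```python
# Minimal illustration of NLG: rendering a structured record as a sentence.
# The record schema and template are hypothetical examples.

def realize(record: dict) -> str:
    """Render a structured weather record as an English sentence."""
    return (f"On {record['date']}, {record['city']} reached a high of "
            f"{record['high_c']} degrees Celsius.")

print(realize({"date": "May 3", "city": "Fairfax", "high_c": 24}))
# Prints: On May 3, Fairfax reached a high of 24 degrees Celsius.
```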
This work is also important for understanding how to design such algorithms to optimize ethical benefits.
Moreover, robots with both linguistic interaction and moral reasoning capabilities have the potential for significant societal impact in many different areas, such as eldercare and education.
This project will also have multidisciplinary research impacts on fields such as artificial intelligence, natural language processing, robotics, and machine ethics. It will be valuable to researchers in moral philosophy and moral psychology, as well.
Phillips received $171,917 from the National Science Foundation for this project. Funding began in August 2020 and will end in late September 2022.