News Release

AI as mediator: 'Smart' replies help humans communicate during pandemic

Peer-Reviewed Publication

Cornell University

ITHACA, N.Y. - Daily life during a pandemic means social distancing and finding new ways to remotely connect with friends, family and co-workers. And as we communicate online and by text, artificial intelligence could play a role in keeping our conversations on track, according to new Cornell University research.

Humans having difficult conversations said they trusted artificially intelligent systems - the "smart" reply suggestions in texts - more than the people they were talking to, according to a new study, "AI as a Moral Crumple Zone: The Effects of AI-Mediated Communication on Attribution and Trust," published online in the journal Computers in Human Behavior.

"We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the artificial intelligence system," said Jess Hohenstein, a doctoral student in the field of information science and the paper's first author. "This introduces a potential to take AI and use it as a mediator in our conversations."

For example, an algorithm could notice that a conversation is going downhill by analyzing the language used, and then suggest conflict-resolution strategies, Hohenstein said.
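The release does not describe how such a mediator would be built, but a minimal sketch of the idea might look like the following: score the tone of the most recent messages and, when the recent trend turns negative, surface a de-escalating suggested reply. The word lists, window size, threshold, and canned suggestions here are illustrative assumptions, not the system examined in the study.

```python
# Illustrative sketch only: a toy "conversation health" monitor that flags
# when recent messages trend negative and offers conflict-resolution style
# suggested replies. All word lists and parameters are made-up assumptions.

NEGATIVE_WORDS = {"never", "always", "ridiculous", "whatever", "fault", "wrong"}
POSITIVE_WORDS = {"thanks", "appreciate", "agree", "sorry", "understand", "sure"}

SUGGESTED_REPLIES = [
    "I see your point - can you tell me more about what's bothering you?",
    "Let's take a step back. What outcome would work for both of us?",
    "I might be misreading your tone. Can we talk this through?",
]


def message_score(text: str) -> int:
    """Crude per-message tone score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)


def suggest_if_negative(messages: list[str], window: int = 3, threshold: int = -2) -> list[str]:
    """Return suggested replies when the last `window` messages trend negative."""
    recent = messages[-window:]
    if sum(message_score(m) for m in recent) <= threshold:
        return SUGGESTED_REPLIES
    return []


if __name__ == "__main__":
    chat = [
        "Thanks for sending the draft.",
        "You always ignore my comments, this is ridiculous.",
        "It's not my fault the deadline moved.",
    ]
    print(suggest_if_negative(chat))
```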

The study was an attempt to explore the myriad ways - both subtle and significant - that AI systems such as smart replies are altering how humans interact. Choosing a suggested reply that's not quite what you intended to say, but saves you some typing, might be fundamentally altering the course of your conversations - and your relationships, the researchers said.

"Communication is so fundamental to how we form perceptions of each other, how we form and maintain relationships, or how we're able to accomplish anything working together," said co-author Malte Jung, assistant professor of information science and director of the Robots in Groups lab, which explores how robots alter group dynamics.

"This study falls within the broader agenda of understanding how these new AI systems mess with our capacity to interact," Jung said. "We often think about how the design of systems affects how we interact with them, but fewer studies focus on the question of how the technologies we develop affect how people interact with each other."

In addition to shedding light on how people perceive and interact with computers, the study offers possibilities for improving human communication - with subtle guidance and reminders from AI.

Hohenstein and Jung said they sought to explore whether AI could function as a "moral crumple zone" - the technological equivalent of a car's crumple zone, designed to deform in order to absorb the crash's impact.

"There's a physical mechanism in the front of the car that's designed to absorb the force of the impact and take responsibility for minimizing the effects of the crash," Hohenstein said. "Here we see the AI system absorb some of the moral responsibility."

###

The research was partly supported by the National Science Foundation.

