News Release

$7.5 million grant will fund development of socially savvy artificial intelligence teammates

Grant and Award Announcement

University of Arizona

University of Arizona researchers have been awarded $7.5 million to create an artificial intelligence agent that can understand social cues and human interactions, and use that information to help teams achieve their goals.

The grant comes from the Defense Advanced Research Projects Agency and is part of DARPA's Artificial Social Intelligence for Successful Teams program.

"The goal of the ASIST program is to develop artificial intelligence with a 'theory of mind,' and create AI that is a good teammate to humans," said Adarsh Pyarelal, a research scientist in the Machine Learning for Artificial Intelligence Lab in the University of Arizona School of Information. Pyarelal is the principal investigator for the DARPA-funded project.

Theory of mind refers to humans' ability to infer the beliefs, desires and intentions of other humans. Popular existing AI agents, such as Siri, Alexa and the Google Assistant, do a good job scouring the internet for information, but they aren't good at reading social cues, Pyarelal said. For example, Siri doesn't know the difference between someone yelling at her and someone speaking politely.

"The thing that makes a human a good teammate is having a good sense of what other people on their team are thinking, so they can predict what their teammates are going to do with some level of certainty," Pyarelal said. "We're trying to build AI with some social awareness and this kind of theory of mind."

The UArizona project, funded over four years, is called ToMCAT, which stands for Theory of Mind-Based Cognitive Architecture for Teams. It is a collaboration among the School of Information in the College of Social and Behavioral Sciences, the Department of Computer Science in the College of Science and the Department of Family Studies and Human Development in the College of Agriculture and Life Sciences' Norton School of Family and Consumer Sciences.

Researchers will develop AI agents that they will test in a Minecraft video game environment. The agents will be partnered with one to four human players and will collaborate with the players to complete custom-designed Minecraft missions. Throughout the missions, the AI agents will gather information about the individual players, as well as their interactions with their other human teammates, through a variety of digital and physical sensors.

Webcams and microphones will record human players' eye movements, facial expressions and voices, and an electrocardiogram machine will monitor the electrical activity of their hearts. Players also will wear a cap that measures brain activity using two brain imaging techniques - electroencephalography, which detects electrical activity in the brain, and functional near-infrared spectroscopy, which uses near-infrared light that passes through the skull to map blood flow in the brain.
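
To give a concrete sense of how such time-aligned, multimodal streams might be organized, the sketch below shows one possible per-player data record. It is purely illustrative: the field names and structure are hypothetical and are not drawn from the ToMCAT codebase.

```python
# Illustrative sketch only: field names are hypothetical, not from the ToMCAT project.
# It shows how one time-stamped snapshot of the sensor streams described above
# might be bundled per player for downstream modeling.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SensorSnapshot:
    """One time-stamped bundle of multimodal signals for a single player."""
    player_id: str
    timestamp: float                 # seconds since mission start
    gaze_xy: Tuple[float, float]     # webcam-derived eye-movement estimate
    facial_expression: str           # e.g. "neutral", "frustrated"
    speech_transcript: str           # microphone audio after speech-to-text
    heart_rate_bpm: float            # from the electrocardiogram
    eeg_channels: List[float] = field(default_factory=list)    # electrical brain activity
    fnirs_channels: List[float] = field(default_factory=list)  # blood-flow (hemodynamic) signal


# A mission recording could then be a time-ordered list of snapshots that
# models consume to infer each player's beliefs, intentions and stress level.
mission_log: List[SensorSnapshot] = []
```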

In the first phase of the project, ToMCAT agents will merely observe and gather information about individual players through the sensors. Later, the agents will observe and collaborate with up to four human players in a team. Eventually, the goal is that the AI agent will have enough information about the individual teammates and the team's goals and social dynamics to suggest interventions, via plain-text chat messages, to help the team meet its objectives.

"The AI agent is going to provide helpful suggestions or check in from time to time about the situation," Pyarelal said. "So, everything comes full circle - the agent observes, it learns, and then, if needed, it can intervene to help the team. For example, if person A and person B don't get along, the AI agent may suggest that maybe you don't want to put them on the same team."

The ToMCAT project aligns with the University of Arizona's continued focus on the Fourth Industrial Revolution, which is characterized by the increasing convergence of the digital, physical and biological worlds.

"Artificial intelligence and machine learning are increasingly present and important in our everyday lives, and the development of new technologies and understanding in this area is a key focus for the University of Arizona," said University of Arizona President Robert C. Robbins. "With DARPA's support, our talented researchers have a tremendous opportunity to take the next step toward making AI even more beneficial to our daily routines and interactions."

Assistance in Stressful Situations

Although socially aware artificial intelligence might sound like the stuff of science-fiction nightmares to some, researchers say it could have valuable practical applications.

"Whether you think it's helpful to have these kinds of tools to help us out in tricky social circumstances might be a question of taste and philosophy," said co-principal investigator Kobus Barnard, a professor of computer science. "But given that we've got all kinds of other ways to have computers help us - to remember things, to keep appointments, to communicate - it's a pretty reasonable expectation that a system could understand social behavior and how people interact and relationships. And there could be opportunities for additional capability of the digital assistant."

Researchers say AI with some degree of social awareness has the potential to be particularly useful in high-stress scenarios.

"Think of a search and rescue situation where you're a first responder and you're trying to navigate a building," Pyarelal said. "It might be a very tense situation where people are screaming. Trying to manage these socially charged, tense situations is something I can see an AI teammate helping with. Maybe it can say, 'Hey, it looks like your heart rate is going through the roof; you should take a moment to breathe.'"

Victims in disaster situations could potentially benefit as well, Pyarelal said. For example, a trapped or injured person found by a robot might have to spend several hours with the machine before human contact can be made.

"Not only does the rescued person need to be kept apprised of the situation, it would help matters a lot if the robot had basic social skills that helped buoy the rescued person's spirits until they were able to be evacuated," Pyarelal said.

Insight into Human Interactions

Data collection for the ToMCAT project will take place in the Lang Lab for Family and Child Observational Research in the Norton School's Frances McClelland Institute for Children, Youth and Families.

Text exchanges between the human players and the AI agent will be analyzed by researchers in the university's Computational Language Understanding Lab, using natural language processing to determine what the humans are doing and how they feel about each other and the mission.
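
As a rough illustration of this kind of chat analysis, the sketch below scores team messages with an off-the-shelf sentiment tool (NLTK's VADER) as a stand-in; the lab's actual natural language processing pipeline is not described in this release, so treat the example purely as an illustration.

```python
# Minimal, illustrative sketch of scoring team chat for sentiment.
# Uses NLTK's VADER scorer as a stand-in for the project's actual NLP pipeline.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

chat_log = [
    ("player_1", "Nice work clearing that room, let's keep moving."),
    ("player_2", "Why did you leave me behind again? This is so frustrating."),
]

# The compound score ranges from -1 (negative) to +1 (positive) and gives a
# rough signal of how teammates feel about each other and the mission.
for speaker, text in chat_log:
    score = analyzer.polarity_scores(text)["compound"]
    print(f"{speaker}: {score:+.2f}  {text}")
```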

In addition to aiding AI development, the experiment will provide valuable data on how humans interact with one another. That will be the primary focus of co-principal investigator Emily Butler, a professor of family studies and human development and lead social scientist on the ToMCAT project.

"My area of interest is interpersonal coordination and how people get themselves organized as a more dynamic system. It's going beyond the individual to think of how a whole group of people coordinate their efforts," Butler said. "In this case, we have a chance to look at multiperson teams, and the most exciting thing will be this multiperson brain scanning that has only really been possible for about 15 years. Being able to get full brain activity from multiple people in real time as they interact will provide us with rich data that we're hoping to be able to use to understand complex interpersonal coordination, both with regard to cognition and emotions."

The UArizona research team is one of 12 teams selected to receive DARPA funding through the ASIST program. The other co-PIs on the ToMCAT project are Clayton Morrison in the School of Information and Rebecca Sharp, Mihai Surdeanu and Marco Antonio Valenzuela-Escarcega in the Department of Computer Science.

###