News Release

Machine learning predicts how long museum visitors will engage with exhibits

Peer-Reviewed Publication

North Carolina State University


image: To determine how machine-learning programs might be able to predict user interaction times, researchers closely monitored museum visitors as they engaged with the interactive exhibit seen here.

Credit: Jonathan Rowe, NC State University

In a proof-of-concept study, education and artificial intelligence researchers have demonstrated the use of a machine-learning model to predict how long individual museum visitors will engage with a given exhibit. The finding opens the door to a host of new work on improving user engagement with informal learning tools.

"Education is an important part of the mission statement for most museums," says Jonathan Rowe, co-author of the study and a research scientist in North Carolina State University's Center for Educational Informatics (CEI). "The amount of time people spend engaging with an exhibit is used as a proxy for engagement and helps us assess the quality of learning experiences in a museum setting. It's not like school; you can't make visitors take a test."

"If we can determine how long people will spend at an exhibit, or when an exhibit begins to lose their attention, we can use that information to develop and implement adaptive exhibits that respond to user behavior in order to keep visitors engaged," says Andrew Emerson, first author of the study and a Ph.D. student at NC State.

"We could also feed relevant data to museum staff on what is working and what people aren't responding to," Rowe says. "That can help them allocate personnel or other resources to shape the museum experience based on which visitors are on the floor at any given time."

To determine how machine-learning programs might be able to predict user interaction times, the researchers closely monitored 85 museum visitors as they engaged with an interactive exhibit on environmental science. Specifically, the researchers collected data on study participants' facial expressions, posture, where they looked on the exhibit's screen and which parts of the screen they touched.

The data were fed into five different machine-learning models to determine which combinations of data and models resulted in the most accurate predictions.

"We found that a particular machine-learning method called 'random forests' worked quite well, even using only posture and facial expression data," Emerson says.
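The study's code and data are not part of this release, but the approach the researchers describe, training a random forest on behavioral features to predict how long a visitor will stay, can be sketched roughly as follows. All feature names and numbers here are illustrative placeholders on synthetic data, not the study's actual features or results.

```python
# Illustrative sketch (not the authors' code): predict a visitor's total
# dwell time at an exhibit from early posture and facial-expression
# features, using a random forest regressor on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_visitors = 200

# Hypothetical per-visitor features summarizing the opening seconds of an
# interaction: e.g., mean lean-forward angle, head-movement variance, and
# a facial-expression score. (Placeholders, standardized units.)
X = rng.normal(size=(n_visitors, 3))

# Synthetic target: total dwell time in seconds, capped near the 12-minute
# interactions mentioned in the release, loosely tied to the features.
y = np.clip(
    120 + 80 * X[:, 0] + 40 * X[:, 2] + rng.normal(0, 30, n_visitors),
    10, 720,
)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)  # R^2 on held-out "visitors"
print(f"R^2 on held-out visitors: {r2:.2f}")
```

Random forests are a reasonable default here because they handle mixed, noisy behavioral features without heavy preprocessing, which is consistent with the finding that posture and facial-expression data alone were informative.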

The researchers also found that the models worked better the longer people interacted with the exhibit, since that gave them more data to work with. For example, a prediction made after a few minutes would be more accurate than a prediction made after 30 seconds. For context, user interactions with the exhibit lasted as long as 12 minutes.
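The intuition that longer interactions yield better predictions can be illustrated with a toy simulation (again, not the study's data): if each second of observation gives a noisy reading of a visitor's underlying engagement, averaging over a longer window produces a cleaner feature and therefore a more accurate dwell-time prediction.

```python
# Toy illustration: prediction quality improves with a longer observation
# window. Each second yields a noisy reading of a latent engagement level;
# averaging more readings gives a cleaner feature for the model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
latent = rng.uniform(0, 1, n)      # true (unobserved) engagement per visitor
dwell = 60 + 600 * latent          # dwell time in seconds (up to ~11 min)

scores = []
for window in (30, 120, 300):      # observation window in seconds
    # One noisy reading per second, averaged over the window.
    readings = latent[:, None] + rng.normal(0.0, 1.0, (n, window))
    feature = readings.mean(axis=1, keepdims=True)
    score = cross_val_score(
        RandomForestRegressor(n_estimators=50, random_state=0),
        feature, dwell, cv=3,
    ).mean()
    scores.append(score)
    print(f"{window:>3}s window: mean CV R^2 = {score:.2f}")
```

The cross-validated R^2 rises as the window grows, mirroring the release's observation that a prediction after a few minutes beats one after 30 seconds.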

"We're excited about this, because it paves the way for new approaches to study how visitors learn in museums," says Rowe. "Ultimately, we want to use technology to make learning more effective and more engaging."


The paper, "Early Prediction of Visitor Engagement in Science Museums with Multimodal Learning Analytics," will be presented at the 22nd ACM International Conference on Multimodal Interaction (ICMI '20), being held online Oct. 25-29. The paper was co-authored by Nathan Henderson, a Ph.D. student at NC State; Wookhee Min and Seung Lee, research scientists at NC State's CEI; James Minogue, an associate professor of teacher education and learning sciences at NC State; and James Lester, Distinguished University Professor of Computer Science and the director of CEI at NC State.

The work was done with support from the National Science Foundation under grant 1713545.
