News Release

Partnered predictions fare better than either AI or humans alone

Peer-Reviewed Publication

Tsinghua University Press

Artificial intelligence (AI) can assess far more data far more quickly than any single human. With such immense pools of information, AI should be able to consider past data, process all the implications and produce a more reliable prediction than a human could, right? That may not always be the case, according to a multi-institution research team that examined the synergies between how humans and AI make predictions.

They published their results on Aug. 23 in the Journal of Social Computing, which is published by Tsinghua University Press.

“Predictive tasks are ubiquitous — any decision-making in any field or facet of life involves predicting the consequences of the available options before choosing them,” said paper author Scott E. Page, professor at the University of Michigan’s Ross School of Business. “Understanding the perils and promises of these assemblages and crafting a proper balance between the two is a major concern moving forward.”

The concern arises, according to Page, from the relatively recent shift from predictions based on experience, some data and gut instinct to predictions grounded in data and the considerations AI systems are programmed to make.

“The increased accuracy resulting from the application of ever more powerful algorithms to ever larger databases begs the question: should humans remain in the predictive arena at all, or should we leave prediction to algorithms entirely?” Page asked.

The answer, the researchers found, is a resounding no: prediction should not be left to algorithms alone. How humans approach predictions is far more nuanced than how AI does, and that nuance can make the critical difference for an accurate forecast.

According to Page, AI handles big data well, while humans are better equipped to analyze what the researchers call “thick” data. Whereas big data consists of many data points of the same type, thick data has fewer data points that together tell a richer story. For example, years of statistical data may allow AI to predict how many home runs a baseball player may hit, but a human is more likely to understand how a well-liked team player may have a longer career.

“Big data and thick data working together will produce more accurate collective predictions,” Page said. “Thick data can catch and draw attention to constellations of factors that might slip through the cracks between separated big data variables. Even though big data cast a wider net, that net contains holes.”

The researchers put this idea to the test by mathematically modeling how weighting human and AI inputs might change predictions. They found that in typical cases, in which future outcomes follow from past outcomes, AI did not need human input to make accurate predictions. However, in atypical cases with more unknown or surprising factors, humans helped the AI reduce potential errors.
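To make the weighting idea concrete, here is a minimal, hypothetical sketch in Python. It is not the paper's model: the fixed 70/30 weight, the toy numbers and the "atypical case" scenario are invented purely to illustrate how blending a human estimate with an AI estimate can hurt slightly in a typical case but help in a surprising one.

# Illustrative sketch only, not the paper's model: the weight, the toy
# numbers and the "atypical" scenario below are invented for illustration.

def combined_prediction(ai_pred, human_pred, weight_ai=0.7):
    """Blend an AI point prediction with a human one using a fixed weight."""
    return weight_ai * ai_pred + (1 - weight_ai) * human_pred

cases = [
    # Typical case: the outcome follows past data, which the AI already fits well.
    ("typical",  10.0, 10.2, 11.5),   # (label, truth, AI guess, human guess)
    # Atypical case: a surprise factor absent from the historical data,
    # which the human's "thick data" picks up better than the AI does.
    ("atypical", 14.0, 10.3, 13.0),
]

for label, truth, ai, human in cases:
    blend = combined_prediction(ai, human)
    print(f"{label:8s} AI-only error: {abs(ai - truth):.2f}   "
          f"blended error: {abs(blend - truth):.2f}")

Run as written, the blend slightly worsens the typical case but cuts the error in the atypical one, mirroring the pattern the researchers describe.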

“So long as humans can continue to identify different attributes, that is, continue to construct thicker data, or better understand atypical cases, they will continue to increase accuracy,” Page said. “Rather than a competition between humans and computers, the future of hybrid predictors will be a complex search for symbiosis.”

The researchers plan to continue exploring how partnered systems of AI and humans can help improve their predictions, including how multiple systems working together may give even more accurate results.

“The particulars cannot be known, but we can almost certainly predict that the roles and contributions of the participants will both adapt to ever growing data and greater computational power,” Page said. “The present and future of cognitive work will surely involve a mangle of humans, algorithms, datasets, subjects, objects, and domains. As they seek to understand the work, these hybrid groups will also shape it.”

Other contributors include first author Lu Hong, Department of Finance, Loyola University; and PJ Lamberson, Department of Communication, University of California, Los Angeles.

The National Institutes of Health’s National Institute of General Medical Sciences (R01GM112938) supported this work.

###

About Journal of Social Computing 

Journal of Social Computing (JSC) is an open access, peer-reviewed scholarly journal that aims to publish high-quality, original research pushing the boundaries of thinking, findings, and designs at the dynamic interface of social interaction and computation. This includes research in (1) computational social science, the use of computation to learn from the explosion of social data becoming available today; (2) complex social systems, the analysis of how dynamic, evolving social collectives constitute emergent computers that solve their own problems; and (3) human-computer interaction, whereby machines and persons recursively combine to generate unique knowledge and collective intelligence, as well as the intersection of these areas. The editorial board welcomes research from fields ranging across the social sciences, computer and information sciences, physics and ecology, communications and linguistics, and, indeed, any field or approach that can challenge and advance our understanding of the interface and integration of computation and social life. We seek to take risks, avoid boredom and court failure on the path to transformative new paradigms, insights, and possibilities. The journal is open to a diversity of theoretical paradigms, methodologies and applications.

About Tsinghua University Press

Established in 1980, Tsinghua University Press (TUP) is a first-class comprehensive publisher in China. It publishes about 2,800 new titles in print and digital format annually, spanning STEM, the social sciences and humanities, and ELT, and has released 33 English- and Chinese-language journals. TUP has a long history of close collaboration with many world-renowned publishers through copyright trade and co-publishing.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.