News Release

Can AI read humans’ minds? A new model shows it’s shockingly good at it

A breakthrough in artificial intelligence that can accurately predict pedestrian behavior may redefine autonomous vehicle operations, traffic safety and the future of human-AI interaction.

Peer-Reviewed Publication

Texas A&M University

Dr. Srikanth Saripalli and the Texas A&M University research team’s breakthrough AI pedestrian-prediction system.

Credit: Texas A&M University College of Engineering

In a striking leap toward safer self-driving cars, researchers at the Texas A&M University College of Engineering and the Korea Advanced Institute of Science and Technology have unveiled a new artificial intelligence (AI) system called OmniPredict.

The AI is the first system to apply a multimodal large language model (MLLM) to pedestrian behavior prediction, tapping into the same technology that powers advanced chatbots and image recognition systems. But instead of generating text or describing images, it combines visual cues with contextual information to predict in real time what pedestrians are likely to do next.
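To make the idea concrete, here is a minimal, hypothetical sketch in Python of what an MLLM-based predictor’s interface might look like. It is not the team’s code: the `query_multimodal_model` helper is a stub standing in for whatever vision-language model backend is used, and the prompt, context fields and output format are illustrative assumptions. The pattern is the one described above: a camera frame plus contextual cues go in, a structured crossing-intent prediction comes out.

```python
import json


def query_multimodal_model(image: bytes, prompt: str) -> str:
    """Stub standing in for a real multimodal LLM call; returns a canned answer.

    In a real system this would send the frame and prompt to a
    vision-language model and return its text response.
    """
    return json.dumps({"will_cross": True, "confidence": 0.8})


def predict_pedestrian_intent(frame_jpeg: bytes, context: dict) -> dict:
    """Ask the model whether the pedestrian in the frame is likely to cross."""
    prompt = (
        "You are assisting an autonomous vehicle. Given the camera frame and "
        "the context below, answer in JSON with fields 'will_cross' "
        "(true/false) and 'confidence' (0 to 1).\n"
        f"Context: {json.dumps(context)}"
    )
    return json.loads(query_multimodal_model(image=frame_jpeg, prompt=prompt))


# Example with made-up contextual cues for a single frame.
prediction = predict_pedestrian_intent(
    b"<jpeg bytes>",
    {"gaze": "toward vehicle", "distance_to_curb_m": 0.5, "vehicle_speed_kmh": 30},
)
print(prediction)  # {'will_cross': True, 'confidence': 0.8}
```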

The AI’s performance? Early tests are already turning heads, suggesting that OmniPredict is remarkably accurate even without specialized training.

“Cities are unpredictable. Pedestrians can be unpredictable,” said Dr. Srikanth Saripalli, the project’s lead researcher and director of the Center for Autonomous Vehicles and Sensor Systems. “Our new model is a glimpse into a future where machines don’t just see what’s happening, they anticipate what humans are likely to do, too.”

A new kind of ‘street smarts’

In the race to make self-driving cars safer, OmniPredict introduces a new level of street smarts — one that inches closer to human intuition.

Rather than simply reacting to what pedestrians are doing, it anticipates what they’re about to do. This shift could redraw the blueprint of urban mobility and change how autonomous vehicles navigate crowded streets.

“It opens the doors for safer autonomous vehicle operation, fewer pedestrian-related incidents and a shift from reacting to proactively preventing danger,” Saripalli said.

The psychological landscape could shift too.

Imagine standing at a crosswalk and, instead of locking eyes with a human driver, knowing that an AI vehicle is tracking your position and is planning around your next likely move.

“Fewer tense standoffs. Fewer near-misses. Streets might even flow more freely. All because vehicles understand not only motion, but most importantly, motives,” Saripalli said.

Beyond crosswalks: Reading human behavior in complex environments

OmniPredict’s implications extend far beyond bustling city streets, chaotic intersections or crowded crosswalks.

“We are opening the door for exciting applications,” Saripalli said. “For instance, the possibility of a machine that can capably detect, recognize and predict the outcomes of a person displaying threatening cues could have important implications.”

Broadly, an AI system that reads posture changes, hesitation, body orientation or signs of stress could be a game-changer for personnel involved in military and emergency operations.

“It could help flag early indicators of risk, or even provide an extra layer of situational awareness,” Saripalli said.

In these scenarios, the new approach might give personnel the ability to rapidly interpret complex environments and make faster, more informed decisions.

“Our goal in the project isn’t to replace humans, but to help augment them with a smarter partner,” said Saripalli.

Putting it to the test

Traditional self-driving systems rely on computer-vision models trained on thousands of labeled images. While powerful, these models struggle to adapt to changing conditions.

“Weather changes, people behaving unexpectedly, rare events and the chaotic elements of a city street can all affect even the most sophisticated vision-based systems,” Saripalli said.

OmniPredict takes a different approach: instead of relying solely on pattern recognition, it uses a multimodal large language model to reason about a scene, weighing visual cues alongside contextual information.

The result is an AI that doesn’t just see a scene; it interprets it and anticipates how each element might move, adjusting in real time.

The team tested OmniPredict against two of the toughest benchmarks in pedestrian behavior research, the JAAD and WiDEVIEW datasets, without any prior specialized training.

The findings, published in Computers & Engineering, show that OmniPredict delivered 67% accuracy, outperforming the latest models by 10%.

It even maintained performance when the researchers added contextual complications, such as partially hidden pedestrians or people looking toward the vehicle.

The AI also showed faster response speeds, stronger generalization across different road contexts and more robust decision-making than traditional systems — encouraging signs for future real-world deployment.
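For readers curious how such a zero-shot benchmark score might be computed, the sketch below is a simplification, not the published evaluation protocol: predicted crossing intent is compared against each clip’s ground-truth label, and accuracy is the fraction that match.

```python
def intent_accuracy(predictions: list[bool], ground_truth: list[bool]) -> float:
    """Fraction of clips where predicted crossing intent matches the label."""
    if len(predictions) != len(ground_truth):
        raise ValueError("prediction and label lists must be the same length")
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)


# Toy example with made-up labels: 2 of 3 predictions match, i.e. ~0.67 accuracy.
print(intent_accuracy([True, False, True], [True, False, False]))
```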

“OmniPredict’s performance is exciting, and its flexibility hints at much broader real-world potential,” Saripalli said.

Turning the corner of autonomy and anticipation

While still a research model and not a road-ready system, OmniPredict points to a future where autonomous vehicles rely less on brute force visual learning and more on behavioral reasoning.

By combining reasoning with perception, the system enables a new kind of shared intelligence, one in which the world isn’t just getting automated, it’s getting profoundly more intuitive.

“OmniPredict doesn’t just see what we do, it understands why we do it and can predict what we are likely to do next,” Saripalli said.

If AI-powered cars can read our next move, the road ahead just got a whole lot smarter.
