News Release

How your brain understands language may be more like AI than we ever imagined

Peer-Reviewed Publication

The Hebrew University of Jerusalem

A new study reveals that the human brain processes spoken language in a sequence that closely mirrors the layered architecture of advanced AI language models. Using electrocorticography data from participants listening to a narrative, the research shows that deeper AI layers align with later brain responses in key language regions such as Broca’s area. The findings challenge traditional rule-based theories of language comprehension and introduce a publicly available neural dataset that sets a new benchmark for studying how the brain constructs meaning.

In a study published in Nature Communications, researchers led by Dr. Ariel Goldstein of the Hebrew University of Jerusalem, in collaboration with Dr. Mariano Schain of Google Research and Prof. Uri Hasson and Eric Ham of Princeton University, uncovered a surprising connection between the way our brains make sense of spoken language and the way advanced AI models analyze text. Using electrocorticography recordings from participants listening to a thirty-minute podcast, the team showed that the brain processes language in a structured sequence that mirrors the layered architecture of large language models such as GPT-2 and Llama 2.

What the Study Found

When we listen to someone speak, our brain transforms each incoming word through a cascade of neural computations. Goldstein’s team discovered that these transformations unfold over time in a pattern that parallels the tiered layers of AI language models. Early AI layers track simple features of words, while deeper layers integrate context, tone, and meaning. The study found that human brain activity follows a similar progression: early neural responses aligned with early model layers, and later neural responses aligned with deeper layers.

This alignment was especially clear in high-level language regions such as Broca’s area, where the peak brain response occurred later in time for deeper AI layers. According to Dr. Goldstein, “What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models. Even though these systems are built very differently, both seem to converge on a similar step-by-step buildup toward understanding.”
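The layer-to-latency relationship described above can be illustrated with a small toy simulation. Everything here is invented for illustration: the signals, lags, and the simple peak-lag correlation are stand-ins, not the study’s actual data or analysis pipeline. The sketch shows the core idea of asking, for each model layer, at what time lag its signal best predicts a neural response, and checking whether deeper layers peak later:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 2000

# A shared "meaning" signal carried by all layers (purely synthetic).
base = rng.standard_normal(n_samples)

def peak_lag(predictor, response, max_lag=50):
    """Return the lag (in samples) at which predictor best
    correlates with response -- the 'peak' of the alignment."""
    corrs = [
        np.corrcoef(predictor[:n_samples - lag], response[lag:])[0, 1]
        for lag in range(max_lag + 1)
    ]
    return int(np.argmax(corrs))

# Hypothetical setup: each deeper layer drives the neural response
# at a later lag, mimicking the reported early-to-deep progression.
layer_lags = [5, 15, 30]   # early, middle, deep layer (arbitrary)
estimated = []
for true_lag in layer_lags:
    response = np.zeros(n_samples)
    response[true_lag:] = base[:n_samples - true_lag]  # delayed copy
    response += 0.1 * rng.standard_normal(n_samples)   # neural noise
    estimated.append(peak_lag(base, response))

print(estimated)  # deeper "layers" peak at later lags
```

In the real study the predictors are per-layer contextual embeddings regressed against electrocorticography signals; this sketch only reproduces the qualitative pattern of later peaks for deeper layers.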

Why It Matters

The findings suggest that artificial intelligence is not just a tool for generating text. It may also offer a new window into understanding how the human brain processes meaning. For decades, scientists believed that language comprehension relied on symbolic rules and rigid linguistic hierarchies. This study challenges that view. Instead, it supports a more dynamic and statistical approach to language, in which meaning emerges gradually through layers of contextual processing.

The researchers also found that classical linguistic features such as phonemes and morphemes did not predict the brain’s real-time activity as well as AI-derived contextual embeddings. This strengthens the idea that the brain integrates meaning in a more fluid and context-driven way than previously believed.

A New Benchmark for Neuroscience

To advance the field, the team publicly released the full dataset of neural recordings paired with linguistic features. This new resource enables scientists worldwide to test competing theories of how the brain understands natural language, paving the way for computational models that more closely resemble human cognition.

