News Release

When is the brain like a subway station? When it’s processing many words at once

New study maps how we simultaneously process different words

Peer-Reviewed Publication

New York University

How the Brain Works to Decode Words

Image: The figure shows how the brain works to decode the different aspects of words over time, with phonetics (i.e., sounds) processed first and most quickly and semantic meaning coming later and taking longer.

Credit: Laura Gwilliams

Trains move through the world’s subway stations in a consistent pattern: arriving, stopping, and moving on to the next stop, a cycle repeated by other trains throughout the day. A new study by a team of New York University psychology and linguistics researchers finds that our brains work much the same way when processing several words at once—as we routinely do when listening to others speak.

The work, which uncovers new ways the brain functions, appears in the journal Proceedings of the National Academy of Sciences (PNAS).

“We found that the brain juggles competing demands by moving information to different parts of the brain over time,” explains Laura Gwilliams, the paper’s lead author, who conducted the study as a New York University doctoral student. “This means that multiple sources of information can be processed at the same time, without them interfering with each other. This is similar to a subway system: by the time the next train arrives at the station, the previous train has moved along to the next stop.”

“The brain’s coding system elegantly balances the preservation of information over time with minimizing overlap among different words and sounds,” adds Alec Marantz, an NYU psychology and linguistics professor and one of the study’s authors. “This system provides a clear view of how the brain may organize and interpret rapidly unfolding speech in real time, linking the processing of language to its neurological foundations.”

It has long been established that, in order to understand speech, the brain works to turn sound into meaning. Specifically, when listening to others, the brain extracts a hierarchy of information: the sounds the person is saying, followed by the syllables they are saying, the words, the phrases, and, finally, their meanings.

However, little is known about how the brain continuously coordinates these hierarchies as it rapidly processes a volley of incoming words. The PNAS study—whose authors also included David Poeppel, an NYU professor of psychology, and Jean-Remi King, now a researcher at Meta—aimed to clarify this dynamic. 

Previous work by Gwilliams, King, Marantz, and Poeppel shed some light on this phenomenon—it showed how the brain “time stamps” sounds in order to correctly understand what we hear. 

In the new PNAS study, the same researchers conducted a series of experiments in which the study’s participants—all native English speakers—listened to two hours of short stories in English on audiobooks. While doing so, magnetoencephalography (MEG) measured the magnetic fields generated by the electrical activity of participants’ brains.

These MEG readings allowed the scientists to examine the different levels of processing—or linguistic “families” (e.g., sound, word form, semantic meaning)—that were taking place in order to turn the sounds of speech into meaning. The readings showed how sounds moved across different areas of the brain—a neurological traffic map that illuminated how the brain’s activity shifted over time as it juggled multiple words simultaneously.

“We found that this dynamic process happens across all levels of the hierarchy in parallel,” notes Gwilliams, now a faculty scholar at the Wu Tsai Neurosciences Institute and Stanford Data Science and an assistant professor in Stanford’s Department of Psychology. “Our results also showed that the speed at which information travels between different neural patterns depends on the level of the feature in the hierarchy: sounds move around quickly, and word meanings move around more slowly.” 

The researchers labeled this process Hierarchical Dynamic Coding.

“We have historically underestimated the intricate dynamics of neural processing,” concludes Gwilliams. “It was assumed that there is a 1:1 link between brain area and function. Our study shows that, in fact, a single feature of the input is passed between a number of brain areas over time—like a train moving between subway stations.” 

This research was supported, in part, by a grant from the National Institutes of Health (R01DC05660).

# # #
