News Release

Reichman University researchers reveal how the brain dynamically reconfigures networks during speech processing

Peer-Reviewed Publication

Reichman University

Professor Amir Amedi, senior author and head of the BCT Institute at Reichman University


Credit: Oz Schechter

How does the brain manage to catch the drift of a mumbled sentence or a flat, robotic voice? A new study led by researchers at Reichman University’s Baruch Ivcher School of Psychology and the Dina Recanati School of Medicine shows that the human brain dynamically reconfigures large-scale neural networks during speech processing, offering new insights into the neural mechanisms underlying language comprehension.

The article, titled “Does nonsense make sense? A springboard to studying dynamic reconfiguration of large-scale networks during semantic and intonation speech”, was published in the prestigious peer-reviewed journal NeuroImage. The study examines how the brain responds to speech that varies in semantic content and intonation — including linguistically structured but nonsensical utterances — allowing researchers to isolate core principles of neural network flexibility.

The research was conducted at the Institute for Brain, Cognition and Technology (BCT) at Reichman University, combining expertise from cognitive neuroscience, neuroimaging, and systems-level brain analysis.

Using advanced functional MRI (fMRI) and analysis of functional connectivity, the research team demonstrated reorganization of large-scale neural networks during speech processing, modulated by the integrity of semantic content and prosodic structure.

“Speech comprehension is not supported by a single language network,” says Dr. Irina Anurova, first author of the study. “Instead, the brain dynamically reshapes communication between large-scale networks depending on the semantic and prosodic demands of the speech signal. Think of it like driving: on a clear, familiar road, you drive almost automatically, but if the road is full of obstacles or missing signs, you shift from autopilot to manual control. When listening to clear, expressive speech, the brain engages a specialized, left-lateralized ‘autopilot’ language network that efficiently processes grammar and meaning.

“However, when speech is degraded, whether scrambled or monotonous, the brain switches to ‘manual control’ mode. It recruits more ancient, domain-general networks, such as the salience network (which acts as an alarm bell for unusual input) and the fronto-parietal executive network (which supports focused attention and working memory). In this mode, the brain actively assembles clues to infer meaning even from scrambled narratives.”


“Most studies examine either semantics or intonation in isolation. Our study applied a rare combination of methods, looking both at speech-related brain regions and at the communication between them during speech perception,” comments Dr. Katarzyna Ciesla, second author of the study.

The findings highlight the brain’s remarkable adaptability and reinforce a growing shift in neuroscience toward dynamic, network-based models of cognition, emphasizing time-varying interactions rather than fixed regional specialization.

Broader Implications
Beyond advancing basic neuroscience, this work has important implications for understanding language-related disorders, including aphasia, neurodevelopmental conditions, and neurological or age-related changes in communication. By characterizing how brain networks adapt under challenging linguistic conditions, the study provides a framework for future clinical research and therapeutics.

Professor Amir Amedi, senior author and head of the BCT Institute at Reichman University, notes:
“When cognition is challenged, the brain does not simply fail — it adapts. Studying how large-scale networks reorganize during speech processing gives us critical insight into the fundamental principles that support flexibility, resilience, and higher-order human communication.”
