News Release

Mom's voice activates many different regions in children's brains, Stanford study shows

Peer-Reviewed Publication

Stanford Medicine

Children's brains are far more engaged by their mother's voice than by the voices of women they do not know, a new study from the Stanford University School of Medicine has found.

Brain regions that respond more strongly to the mother's voice extend beyond auditory areas to include those involved in emotion and reward processing, social functions, detection of what is personally relevant and face recognition.

The study, which is the first to evaluate brain scans of children listening to their mothers' voices, will be published May 16 in the Proceedings of the National Academy of Sciences. The strength of connections between the brain regions activated by the voice of a child's own mother predicted that child's social communication abilities, the study also found.

"Many of our social, language and emotional processes are learned by listening to our mom's voice," said lead author Daniel Abrams, PhD, instructor in psychiatry and behavioral sciences. "But surprisingly little is known about how the brain organizes itself around this very important sound source. We didn't realize that a mother's voice would have such quick access to so many different brain systems."

Preference for mom's voice

Decades of research have shown that children prefer their mothers' voices: In one classic study, 1-day-old babies sucked harder on a pacifier when they heard the sound of their mom's voice, as opposed to the voices of other women. However, the mechanism behind this preference had never been defined.

"Nobody had really looked at the brain circuits that might be engaged," senior author Vinod Menon, PhD, professor of psychiatry and behavioral sciences, said. "We wanted to know: Is it just auditory and voice-selective areas that respond differently, or is it more broad in terms of engagement, emotional reactivity and detection of salient stimuli?"

The study examined 24 children ages 7 to 12. All had IQs of at least 80, none had any developmental disorders, and all were being raised by their biological mothers. Parents answered a standard questionnaire about their child's ability to interact and relate with others. And before the brain scans, each child's mother was recorded saying three nonsense words.

"In this age range, where most children have good language skills, we didn't want to use words that had meaning because that would have engaged a whole different set of circuitry in the brain," said Menon, who is the Rachael L. and Walter F. Nichols, MD, Professor.

Two mothers whose children were not being studied, and who had never met any of the children in the study, were also recorded saying the three nonsense words. These recordings were used as controls.

MRI scanning

The children's brains were scanned via magnetic resonance imaging while they listened to short clips of the nonsense-word recordings, some produced by their own mother and some by the control mothers. Even from very short clips, less than a second long, the children could identify their own mothers' voices with greater than 97 percent accuracy.

The brain regions that were more engaged by the voices of the children's own mothers than by the control voices included auditory regions, such as the primary auditory cortex; regions of the brain that handle emotions, such as the amygdala; brain regions that detect and assign value to rewarding stimuli, such as the mesolimbic reward pathway and medial prefrontal cortex; regions that process information about the self, including the default mode network; and areas involved in perceiving and processing the sight of faces.

"The extent of the regions that were engaged was really quite surprising," Menon said.

"We know that hearing mother's voice can be an important source of emotional comfort to children," Abrams added. "Here, we're showing the biological circuitry underlying that."

Children whose brains showed stronger connections among all these regions when hearing their mom's voice also had the strongest social communication abilities, suggesting that increased connectivity among these regions is a neural fingerprint of greater social communication ability in children.

'An important new template'

"This is an important new template for investigating social communication deficits in children with disorders such as autism," Menon said. His team plans to conduct similar studies in children with autism, and is also in the process of investigating how adolescents respond to their mother's voice to see whether the brain responses change as people mature into adulthood.

"Voice is one of the most important social communication cues," Menon said. "It's exciting to see that the echo of one's mother's voice lives on in so many brain systems."

###

Other Stanford authors of the study are Tianwen Chen, research associate; clinical research coordinators Paola Odriozola, Katherine Cheng and Amanda Baker; Aarthi Padmanabhan, PhD, postdoctoral scholar in psychiatry and behavioral sciences; Srikanth Ryali, PhD, instructor in psychiatry and behavioral sciences; John Kochalka, research assistant; and Carl Feinstein, MD, professor emeritus of psychiatry and behavioral sciences. Menon and Feinstein are members of Stanford's Child Health Research Institute.

The study was funded by the National Institutes of Health (grants K01MH102428, K25HD074652, DC011095 and MH084164), as well as by the Singer Foundation and the Simons Foundation. Stanford's Department of Psychiatry and Behavioral Sciences also supported the work.

Print media contact: Erin Digitale at (650) 724-9175 (digitale@stanford.edu)
Broadcast media contact: Margarita Gallardo at (650) 723-7897 (mjgallardo@stanford.edu)
