News Release 

How the brain separates words from song

American Association for the Advancement of Science

The perception of speech and music, two of the most uniquely human uses of sound, is enabled by specialized neural systems in different brain hemispheres, each adapted to respond to specific features in a sound's acoustic structure, a new study reports. Though it has been known for decades that the two hemispheres of the brain respond to speech and music differently, this study used a unique approach to reveal why this specialization exists, showing that it depends on the type of acoustic information in the stimulus.

Music and speech are often inextricably entwined, and the human ability to recognize and separate words from melodies within a single continuous soundwave represents a significant cognitive challenge. Speech perception is thought to rely strongly on the ability to process short-lived temporal modulations, whereas melody perception depends on the detailed spectral composition of sounds, such as fluctuations in frequency. Previous studies have proposed a left- and right-hemisphere neural specialization for handling speech and music information, respectively. However, whether this brain asymmetry stems from the different acoustic cues of speech and music or from domain-specific neural networks has remained unclear.

By combining ten original sentences with ten original melodies, Philippe Albouy and colleagues created a collection of 100 unique a cappella songs, each containing acoustic information in both the temporal (speech) and spectral (melodic) domains. The nature of the recordings allowed the authors to selectively degrade each song in either the temporal or the spectral domain. Albouy et al. found that degrading temporal information impaired speech recognition but not melody recognition. Conversely, melody perception decreased only with spectral degradation of the song.
Concurrent fMRI brain scanning revealed asymmetrical neural activity: decoding of speech content occurred primarily in the left auditory cortex, while decoding of melodic content occurred primarily in the right. In a related Perspective, Daniela Sammler discusses the study's findings in more detail.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.