News Release

How the brain sorts babble into auditory streams

Peer-Reviewed Publication

Cell Press

How the brain's auditory processing centers sort a babble of different sounds, like cocktail party chatter, into identifiable individual voices, a puzzle known as "the cocktail party problem," has long been a mystery.

Now, researchers analyzing how both humans and monkeys perceive sequences of tones have created a model that can predict the central features of this process, offering a new approach to studying its mechanisms.

The research team of Christophe Micheyl, Biao Tian, Robert Carlyon, and Josef Rauschecker published their findings in the October 6, 2005, issue of Neuron.

For both the humans and the monkeys, the researchers used an experimental method in which they played repeating triplet sequences of tones that alternated between two frequencies. Researchers know that when the frequencies are close together and alternate slowly, the listener perceives a single stream that sounds like a galloping horse. However, when the tones are at widely separated frequencies or played in rapid succession, the listener perceives two separate streams of beeps.
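The release does not give the exact stimulus parameters. As a minimal sketch of this kind of alternating-tone triplet paradigm, here is one way such a sequence could be synthesized in Python; the frequencies, tone duration, and tempo below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def tone(freq_hz, dur_s, sr=44100):
    """Synthesize a pure tone with short raised-cosine ramps to avoid clicks."""
    t = np.arange(int(dur_s * sr)) / sr
    y = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * sr)  # 5 ms on/off ramps
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

def aba_sequence(f_a=1000.0, sep_semitones=6.0, tone_dur=0.1, n_triplets=20, sr=44100):
    """Repeat A-B-A-(silence) triplets. Small separations at slow tempos tend
    to be heard as one "galloping" stream; large separations or fast tempos
    tend to split into two streams of beeps."""
    f_b = f_a * 2 ** (sep_semitones / 12.0)  # B tone sits sep_semitones above A
    silence = np.zeros(int(tone_dur * sr))   # silent slot completes each triplet
    triplet = np.concatenate([tone(f_a, tone_dur, sr),
                              tone(f_b, tone_dur, sr),
                              tone(f_a, tone_dur, sr),
                              silence])
    return np.tile(triplet, n_triplets)

seq = aba_sequence()  # roughly 8 seconds of stimulus at the default settings
```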

Importantly, at intermediate frequency separations or speeds, the listeners' perceptions can shift after a few seconds from the single galloping sound to the two streams of beeps. This phenomenon let the researchers probe the neurobiology of auditory stream perception, because they could examine how perception changed while the stimulus stayed the same.

In the human studies, Micheyl, working in the MIT laboratory of Andrew Oxenham, asked subjects to listen to such tone sequences and signal when their perceptions changed. The researchers found that the subjects showed the characteristic perception changes at the intermediate frequency differences and speeds.

Then, Tian, working in Rauschecker's laboratory at Georgetown University Medical Center, recorded signals from neurons in the auditory cortex of monkeys as the same sequences of tones were played to the animals. These neuronal signals could be used to indicate the monkeys' perceptions of the tone sequences.

From the monkey data, the researchers developed a model that aimed to predict how human perception shifts between one and two auditory streams under different frequency separations and tone presentation rates.
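The release does not describe the model's internals. Purely as a hypothetical illustration of the qualitative behavior such a model would need to capture (the chance of hearing two streams growing with frequency separation, with tempo, and with listening time), here is a toy predictor; the logistic form and every constant are assumptions for illustration, not the authors' model.

```python
import numpy as np

def p_two_streams(sep_semitones, rate_hz, t_s, k_sep=0.5, k_rate=0.3, tau_s=4.0):
    """Toy predictor: probability of perceiving two streams rises with
    frequency separation and tempo, and builds up over several seconds
    (time constant tau_s), echoing the perceptual build-up the release
    describes. All constants here are illustrative assumptions."""
    drive = k_sep * sep_semitones + k_rate * rate_hz - 4.0  # arbitrary bias term
    buildup = 1.0 - np.exp(-t_s / tau_s)                    # seconds-long build-up
    return buildup / (1.0 + np.exp(-drive))                 # logistic in the drive

# Small separation, slow tempo: mostly one stream even after 10 s (~0.11).
print(p_two_streams(sep_semitones=1, rate_hz=5, t_s=10.0))
# Large separation, fast tempo: two streams, building up over seconds (~0.89).
print(p_two_streams(sep_semitones=9, rate_hz=10, t_s=10.0))
```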

"Using this approach, we demonstrate a striking correspondence between the temporal dynamics of neural responses to alternating-tone sequences in the primary cortex…of awake rhesus monkeys and the perceptual build-up of auditory stream segregation measured in humans listening to similar sound sequences," concluded the researchers.

In a commentary on the paper in the same issue of Neuron, Michael DeWeese and Anthony Zador wrote that the new approach "promises to elucidate the neural mechanisms underlying both our conscious experience of the auditory world and our impressive ability to extract useful auditory streams from a sea of distracters."

###

The researchers include Christophe Micheyl of the Massachusetts Institute of Technology; Biao Tian and Josef P. Rauschecker of Georgetown University Medical Center; and Robert P. Carlyon of the MRC Cognition and Brain Sciences Unit. The research was supported by an Engineering and Physical Sciences Research Council research grant via HearNet, NIH grants, and the CNRS.

Micheyl et al.: "Perceptual organization of tone sequences in the auditory cortex of awake macaques." Published in Neuron, Vol. 48, 139–148, October 6, 2005. DOI: 10.1016/j.neuron.2005.08.039, www.neuron.org.

