News Release

Key mechanism in the brain's computation of sound location identified

Peer-Reviewed Publication

PLOS

Animals can locate the source of a sound by detecting microsecond (one millionth of a second) differences in arrival time at their two ears. New York University researchers have identified a mechanism the brain uses to help process sound localization. This group of scientists found that one reason these neurons can perform such a rapid and sensitive computation is that they are extremely responsive to the input's "rise time"—the time it takes the synaptic input to reach its peak. The findings will be published next week in the online, open-access journal PLoS Biology.

The brain compares the arrival times of a sound at the two ears to estimate the location of its source. The neurons encoding these differences—called interaural time differences (ITDs)—receive a message from each ear. After receiving these messages, or synaptic inputs, they perform a microsecond-scale computation to determine the source of the sound. Existing theories held that the biophysical properties of the two inputs are identical—that is, that messages coming from each ear are rapidly processed at the same speed and in the same manner by these neurons. The NYU researchers challenged this theory by focusing on the nature of the neurons and their inputs—specifically, how sensitive the neurons are at detecting differences in the inputs' rise times, and how much those rise times differ between the messages arriving from each ear.
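For illustration, the basic coincidence computation can be sketched in a few lines of Python. This is a toy model, not the study's method: the coincidence window and arrival times below are made-up values, chosen only to show how a small versus a large interaural time difference changes the outcome.

import numpy as np

def coincidence_detector(left_arrivals_us, right_arrivals_us, window_us=50.0):
    """Return True for each left-ear input that coincides with a right-ear
    input within window_us microseconds (an illustrative coincidence window)."""
    right = np.asarray(right_arrivals_us)
    return [bool(np.any(np.abs(right - t) <= window_us)) for t in left_arrivals_us]

# A sound from straight ahead arrives at both ears at nearly the same time;
# a sound off to one side produces an interaural time difference (ITD).
print(coincidence_detector([1000.0], [1010.0]))  # small ITD: coincidence detected
print(coincidence_detector([1000.0], [1200.0]))  # large ITD: no coincidence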

Buoyed by predictions from their computer modeling work, the researchers examined this process in gerbils, which are good candidates for study because they process sounds in a frequency range similar to humans' and with apparently similar neural architecture. Their initial experimental findings were obtained by examining the gerbils' neuronal activity in the part of the brain in charge of this task. They showed that the rise times of the synaptic inputs coming from the two ears differ: messages driven by the ipsilateral ear rise faster than those driven by the contralateral ear (the brain has two groups of neurons that perform this computation, one in each hemisphere—ipsilateral messages come from the same-side ear and contralateral messages from the opposite-side ear). In addition, they found that the arrival times of the messages coming from the two ears differed. This finding was not surprising, as the path from each ear to these neurons is not symmetric. Other researchers had assumed that such asymmetry exists, but it had never been measured and reported prior to this study. Given this newfound complexity in how sound-driven signals reach these neurons, the researchers concluded that the neurons could not process them in the way previously theorized.
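The rise-time asymmetry can be illustrated with a standard alpha-function model of a synaptic input, whose time-to-peak equals its time constant. This is only a sketch under assumed parameters: the two time constants below are invented, chosen so that the ipsilateral input rises faster than the contralateral one, as the study reports; they are not the measured values.

import numpy as np

def alpha_epsp(t_ms, tau_ms):
    """Alpha-function synaptic input: g(t) = (t/tau) * exp(1 - t/tau).
    Peaks at t = tau with amplitude 1."""
    t = np.maximum(t_ms, 0.0)
    return (t / tau_ms) * np.exp(1.0 - t / tau_ms)

t = np.linspace(0, 5, 5001)           # time axis in ms, 1-microsecond steps
ipsi = alpha_epsp(t, tau_ms=0.3)      # faster rise (same-side ear), assumed value
contra = alpha_epsp(t, tau_ms=0.5)    # slower rise (opposite-side ear), assumed value

print("ipsi time-to-peak:   %.2f ms" % t[np.argmax(ipsi)])
print("contra time-to-peak: %.2f ms" % t[np.argmax(contra)])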

Key insights about how these neurons actually function in processing sound from both ears were obtained using the computer model. The results showed that the neurons perform the computation differently than neuroscientists had previously proposed. These neurons not only encode the coincidence in arrival time of the two messages; they also detect details of the inputs' shape that are more directly related to the time scale of the computation itself than the features proposed in previous studies.

"Some neurons in the brain respond to the net amplitude and width of summed inputs; they are integrators," explained Pablo Jercog and John Rinzel, two of the study's co-authors: "These auditory neurons respond to the rise time of the summed input and care less about the width. They are differentiators—key players on the brain's calculus team for localizing a sound source."

###

Jercog is a former graduate student in NYU's Department of Physics and Center for Neural Science and is now a post-doctoral fellow at Columbia University; John Rinzel is a professor in the Center for Neural Science and the Courant Institute of Mathematical Sciences; and Dan Sanes is a professor in NYU's Department of Biology and Center for Neural Science.

The study's other authors were Gytis Svirskis, a former researcher at the Center for Neural Science, and Vibhakar Kotak, a research associate professor at the Center for Neural Science.

Funding: This work is supported by National Institutes of Health (NIH) grants DC008543 and MH62595 to PEJ, GS, and JR and also NIH grant DC006864 to DHS and VCK. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests statement: The authors declare that no competing interests exist.

Citation: Jercog PE, Svirskis G, Kotak VC, Sanes DH, Rinzel J (2010) Asymmetric Excitatory Synaptic Dynamics Underlie Interaural Time Difference Processing in the Auditory System. PLoS Biol 8(6): e1000406. doi:10.1371/journal.pbio.1000406

PLEASE ADD THE LINK TO THE PUBLISHED ARTICLE IN ONLINE VERSIONS OF YOUR REPORT: http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.1000406

PRESS ONLY PREVIEW OF THE ARTICLE: http://www.plos.org/press/plbi-08-06-Rinzel.pdf

RELATED PRIMER: http://www.plos.org/press/plbi-08-06-RinzelPrimer.pdf
