Using robotics to manipulate the brain's perception of jaw movement while words are spoken, researchers have deepened our understanding of the importance of non-auditory sensory cues in the brain's control of speech. The findings are reported by Sazzad Nasir and David Ostry of McGill University and appear in the October 10th issue of the journal Current Biology, published by Cell Press.
When we speak, our ability to effectively produce words depends not only on auditory feedback signals to the brain, but also on so-called somatosensory information that informs the brain of the relative positioning of different parts of the body, a process known as proprioception. Cues of this sort that might be relevant during speech include those that inform the brain of the openness of the jaw or the changing positions of the tongue or lips.
To investigate how such somatosensory cues are used during speech production, the researchers dissociated their contribution from that of auditory cues by using a robotic device that slightly altered the path of the jaw's motion at different points during speech without significantly disrupting the acoustic quality of the spoken words. Because jaw motion could be manipulated at specific points during speaking, the researchers could target vowel or consonant sounds and test whether the production of certain types of sound was especially sensitive to somatosensory cues. They found that, over time, the subjects learned to compensate for the robotic interference, thereby "correcting" the somatosensory feedback the brain receives during speech. This learning took place even when speech sounded normal, and it occurred whether the interference was applied during vowel or consonant production.
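The gradual compensation described above is the signature of trial-by-trial sensorimotor adaptation. As a rough illustration only (this is not the authors' model; the perturbation size, learning rate, and trial count are invented for the sketch), adaptation to a constant perturbation can be simulated as an iterative correction driven by the residual somatosensory error:

```python
# Hypothetical sketch of trial-by-trial adaptation to a constant
# perturbation; all numbers are illustrative, not from the study.
perturbation = 5.0    # robot-imposed jaw deviation (arbitrary units)
learning_rate = 0.3   # fraction of the error corrected each trial
compensation = 0.0    # learned counter-movement, starts at zero

errors = []
for trial in range(20):
    error = perturbation - compensation   # residual somatosensory error
    compensation += learning_rate * error # update the compensation
    errors.append(error)

# The residual error shrinks across trials as the speaker adapts,
# even though (in the experiment) the acoustics stayed near normal.
```

In a model like this the error decays geometrically toward zero, mirroring how the subjects' jaw paths returned toward their unperturbed trajectories over repeated utterances.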
The findings support the idea that accurate acoustic quality is not the brain's only goal during the motor control of speech; precision in expected somatosensory feedback cues is also an important endpoint.
The researchers include Sazzad M. Nasir of McGill University in Montreal, Québec, Canada, and David J. Ostry of McGill University and of Haskins Laboratories in New Haven, Connecticut.
This research was supported by National Institute on Deafness and Other Communication Disorders grant DC-04669, the Natural Sciences and Engineering Research Council of Canada, and the Fonds québécois de la recherche sur la nature et les technologies.
Nasir et al.: "Somatosensory Precision in Speech Production." Current Biology 16, 1918-1923, October 10, 2006. DOI: 10.1016/j.cub.2006.07.069. www.current-biology.com