Herzliya, Israel – Researchers from the Baruch Ivcher Institute for Brain, Cognition and Technology at Reichman University’s Baruch Ivcher School of Psychology have developed a technology that helps people understand speech and sound through touch – and which, in the future, will also allow them to localize those sounds. The technology is based on the premise that everything we experience in life is multi-sensory, so the information transmitted to the brain is received in several different areas. If one sense is not working properly, another sense can be trained to compensate for it.
We often find ourselves in situations in which it is difficult to understand what we are being told. This can happen when the speaker has a soft voice or unclear speech, when the environment is noisy, or, as has been the case during the COVID-19 pandemic, when people must communicate from behind a face mask. While these situations are difficult for anyone, they can be insurmountable for people who are deaf or hard of hearing. The new technology is designed to help in such cases.
In an experiment conducted by the researchers, 40 subjects with normal hearing were asked to repeat sentences that had been distorted to simulate hearing through a cochlear implant in a noisy environment. In some cases, vibrations corresponding to the lower speech frequencies were delivered to the subjects' fingertips along with the sentences. To produce these vibrations, the researchers developed an audio-tactile sensory substitution device (SSD) – a system that converts sound frequencies into vibrations. SSDs are systems that convert input from one sense into another (hearing to touch, sight to hearing, etc.).
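To make the conversion concrete, the sketch below shows one plausible way such an audio-to-tactile mapping could work in Python: low-pass filter the speech to keep the band a fingertip actuator can reproduce, then derive an amplitude envelope to drive vibration intensity. The 250 Hz cutoff, the filter orders, and the envelope mapping are illustrative assumptions, not the published design of the researchers' SSD.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def audio_to_vibration(audio, sr, cutoff_hz=250):
        # Keep the low-frequency band of the speech signal -- roughly
        # the range of the voice fundamental frequency, which a
        # fingertip vibrotactile actuator could plausibly reproduce.
        # (cutoff_hz = 250 is an illustrative assumption.)
        sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
        low_band = sosfilt(sos, audio)

        # Rectify and smooth to obtain an amplitude envelope that
        # could modulate the actuator's vibration intensity.
        env_sos = butter(2, 30, btype="low", fs=sr, output="sos")
        envelope = sosfilt(env_sos, np.abs(low_band))
        return low_band, envelope

Driving an actuator with low_band while scaling its intensity by envelope would deliver a tactile signal that rises and falls with the low frequencies of the speech, in the spirit of the device described above.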
The first finding from the experiment was that the subjects' comprehension improved over the course of a 45-minute training period accompanied by visual feedback. Following the training, the participants were able to understand a new set of sentences in a noisier environment and under conditions that made speech harder to understand. Performance improved significantly when the participants received the corresponding vibrations in addition to the audio, demonstrating that the technology improves speech comprehension.
Prof. Amir Amedi, head of the Baruch Ivcher Institute for Brain, Cognition and Technology at Reichman University’s Baruch Ivcher School of Psychology: “With great respect for the world of neuroscience, we believe that the adult brain can also learn, in a relatively simple way, to use one combination of senses or another to better understand situations. This assumption is consistent with the institute's previous findings, which have shown that the brain is not divided into separate areas of specialization according to the senses, but rather according to the performance of tasks; so, for example, speech can also be understood through touch, not just through hearing. This system can help both people who are hard of hearing and anyone trying to understand what is being said on a phone call or learning a new language.”
Dr. Katarzyna Cieśla, postdoctoral fellow at the Baruch Ivcher Institute for Brain, Cognition and Technology and co-director of the Ruth and Meir Rosental Brain Imaging Center at Reichman University: “The next phase of our research is now being carried out with people who are hard of hearing and people who are completely deaf. At this stage, the sensory intervention will be individually tailored to each participant – a combination of sound and vibration or, for the deaf, vibration alone before the implantation of a cochlear implant – in order to establish their understanding of speech with the help of a changing vibration.”
The full article was published in Scientific Reports, a Nature Portfolio journal: www.nature.com/articles/s41598-022-06855-8.pdf
Journal: Scientific Reports
Method of Research: Experimental study
Subject of Research: People
Article Title: Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding
Article Publication Date: 25-Feb-2022