News Release

Artificial neurons help decode cortical signals

Peer-Reviewed Publication

National Research University Higher School of Economics

Image: Alexey Ossadtchi, director of the HSE Centre for Bioelectric Interfaces. Credit: Alexey Ossadtchi

Russian scientists have proposed a new algorithm for automatically decoding cortical signals and interpreting the decoder's weights, which can be used both in brain-computer interfaces and in fundamental research. The results of the study were published in the Journal of Neural Engineering.

Brain-computer interfaces are needed to create robotic prostheses and neuroimplants, rehabilitation simulators, and devices that can be controlled by the power of thought. These devices help people who have suffered a stroke or physical injury to move (in the case of a robotic wheelchair or prostheses), communicate, use a computer, and operate household appliances. In addition, in combination with machine learning methods, neural interfaces help researchers understand how the human brain works.

Most often, brain-computer interfaces rely on the electrical activity of neurons, measured, for example, with electro- or magnetoencephalography. However, a special decoder is needed to translate neuronal signals into commands. Traditional signal-processing methods require painstaking manual identification of informative features: signal characteristics that, from a researcher's point of view, appear most important for the decoding task.

Initially, the authors focused on electrocorticography (ECoG) data, an invasive recording of neural activity made with electrodes placed directly on the cortical surface under the dura mater (the membrane that encases the brain), and developed an artificial neural network architecture that automates the extraction of interpretable features.

As conceived by the scientists, the neural network algorithm should not be too complicated in terms of the number of parameters, should be tuned automatically, and should allow the learned parameters to be interpreted in physiologically meaningful terms. The last requirement is especially important: if it is met, the neural network can be used not only to decode signals but also to gain new insights into the underlying neuronal mechanisms, a dream come true for neuroscientists and neurologists. Therefore, in addition to a new neural network for signal processing, the authors proposed (and theoretically justified) a method for interpreting the parameters of a broad class of neural networks.
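To give a flavor of what such an interpretation can look like, the short sketch below (in Python) applies the well-known weights-to-patterns transformation for linear spatial filters: multiplying the learned filter weights by the channel covariance of the recorded signal yields a spatial pattern that can be mapped onto the electrode grid. This is a simplified, hypothetical illustration of this family of interpretation methods, not the authors' exact procedure; the function name and array shapes are assumptions.

    import numpy as np

    def spatial_pattern(X, w):
        """Illustrative weights-to-patterns step (an assumption, not the paper's exact method).
        X: (n_channels, n_samples) multichannel recording
        w: (n_channels,) learned spatial filter weights
        Returns a spatial pattern that can be plotted over the electrode locations."""
        cov = np.cov(X)                            # channel-by-channel covariance of the data
        pattern = cov @ w                          # forward-model pattern of the extracted source
        return pattern / np.linalg.norm(pattern)   # normalized for visualization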

The neural network proposed by the researchers consists of several similarly structured branches, each of which is automatically tuned to analyze the signals of a separate neural population in a certain frequency range while being tuned away from interference. To do this, the branches use convolutional layers similar to those found in neural networks designed for image analysis; here they act as spatial and frequency filters. Knowing the weights of the spatial filter, it is possible to determine where the neural population is located, while the temporal convolution weights show how the population's activity changes over time and indirectly indicate its size.
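For illustration only, here is a minimal sketch (using PyTorch) of how such a branch could be organized: a pointwise spatial convolution that mixes the electrode channels into one virtual channel, followed by a temporal convolution acting as a learned frequency filter, with several branches feeding a linear readout. The layer sizes, class names and the readout stage are illustrative assumptions, not the authors' published architecture.

    import torch
    import torch.nn as nn

    class Branch(nn.Module):
        # One interpretable branch: a spatial filter followed by a temporal
        # (frequency-selective) filter and an envelope-like nonlinearity.
        def __init__(self, n_channels, kernel_len=65):
            super().__init__()
            # Spatial filter: mixes all electrode channels into one virtual channel.
            self.spatial = nn.Conv1d(n_channels, 1, kernel_size=1, bias=False)
            # Temporal convolution: a learned FIR filter selecting the informative band.
            self.temporal = nn.Conv1d(1, 1, kernel_size=kernel_len,
                                      padding=kernel_len // 2, bias=False)

        def forward(self, x):              # x: (batch, n_channels, n_samples)
            y = self.spatial(x)            # (batch, 1, n_samples)
            y = self.temporal(y)           # band-limited component
            return torch.abs(y)            # crude amplitude envelope

    class CompactDecoder(nn.Module):
        # Several parallel branches, pooled over time and mapped to the decoded
        # output (e.g., five finger trajectories); sizes here are assumptions.
        def __init__(self, n_channels, n_branches=8, n_outputs=5):
            super().__init__()
            self.branches = nn.ModuleList([Branch(n_channels) for _ in range(n_branches)])
            self.readout = nn.Linear(n_branches, n_outputs)

        def forward(self, x):
            feats = torch.stack([b(x).mean(dim=-1) for b in self.branches], dim=-1)
            return self.readout(feats.squeeze(1))

In such a sketch, feeding a window of ECoG data of shape (batch, n_channels, n_samples) produces one estimate per output, and the learned spatial weights of each branch are what an interpretation procedure of the kind described above would map back onto the cortex.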

To assess the performance of their neural network in combination with the new method for interpreting its parameters, the scientists first generated a set of realistic simulated data: 20 minutes of activity from 44 neural populations. Noise was added to the data to simulate the interference encountered when recording signals in real conditions. The second test set was a dataset from BCI Competition IV containing ECoG data of several subjects who periodically moved their fingers spontaneously. A third set of ECoG data was collected by the scientists themselves at the Moscow State University of Medicine and Dentistry, which serves as the clinical base of the Centre for Bioelectric Interfaces of HSE University. Unlike the other datasets, the recordings collected by the scientists contained complete geometric information about the location of the ECoG electrodes on the surface of each patient's cerebral cortex. This made it possible to interpret the weights of the spatial filters learned by the neural network and to discern somatotopy (i.e., the relationship between a neural population's position on the cerebral cortex and the body part it functionally corresponds to) in the locations of the neural populations pivotal for decoding the movement of each finger.

The neural network performed well: on the BCI Competition IV dataset, it worked on par with the solution proposed by the competition winners but, unlike that solution, relied on automatically extracted features. Working with both the real and the simulated data, the researchers showed that the learned parameters can be interpreted correctly and in detail, and that this interpretation yields physiologically plausible results. The researchers also applied the new technique to the classification of imagined movements based on non-invasive EEG data (obtained from the surface of the head, without implanting electrodes). As in the case of ECoG, the neural network provided high decoding accuracy and interpretable features.

'We are already using this approach to build invasive brain-computer interfaces, as well as to address preoperative cortical mapping, which is necessary to ensure that key behavioral functions are preserved after brain surgery,' says Alexey Ossadtchi, scientific lead of the study and director of the HSE Centre for Bioelectric Interfaces. 'In the near future, the developed technique will be used to automatically extract knowledge about the principles by which the brain implements a broad range of behavioral functions.'

###

The study involved researchers from the Centre for Bioelectric Interfaces of HSE University and the Moscow State University of Medicine and Dentistry.

