News Release

How can the brain compete with AI?

The brain is a shallow architecture with only a few layers, whereas current AI architectures are deep, consisting of many layers. Can brain-inspired shallow architectures compete with deep architectures, and if so, what is the underlying mechanism?

Peer-Reviewed Publication

Bar-Ilan University

[Video: Can the shallow brain architecture compete with deep learning? Credit: Prof. Ido Kanter, Bar-Ilan University]

Neural network learning techniques stem from the dynamics of the brain. However, the two scenarios, brain learning and deep learning, are intrinsically different. One of the most prominent differences is the number of layers each possesses. Deep learning architectures typically consist of numerous layers, which can be increased to hundreds, enabling efficient learning of complex classification tasks. In contrast, the brain consists of very few layers, yet despite its shallow architecture and its noisy and slow dynamics, it can efficiently perform complex classification tasks.

The key question driving new research is the possible mechanism underlying the brain’s efficient shallow learning -- one that enables it to perform classification tasks with the same accuracy as deep learning. In an article published today in Physica A, researchers from Bar-Ilan University in Israel show how such shallow learning mechanisms can compete with deep learning. “Instead of a deep architecture, like a skyscraper, the brain consists of a wide shallow architecture, more like a very wide building with only very few floors,” said Prof. Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research.

“The capability to correctly classify objects increases as the architecture becomes deeper, with more layers. In contrast, the brain’s shallow mechanism indicates that a wider network better classifies objects,” said Ronit Gross, an undergraduate student and one of the key contributors to this work. “Wider and higher architectures represent two complementary mechanisms,” she added. Nevertheless, realizing very wide shallow architectures that imitate the brain’s dynamics requires a shift in the properties of advanced GPU technology, which can accelerate deep architectures but fails to implement wide shallow ones.
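To make the skyscraper-versus-wide-building picture concrete, here is a minimal sketch (not taken from the paper; the layer widths, input size, and class count are arbitrary assumptions chosen for illustration) that compares the shape of a deep, narrow fully connected network with a wide, shallow one:

# Illustrative sketch only, not the researchers' model: comparing a "skyscraper"
# deep network with a "wide building" shallow one. All sizes are assumed values.

def mlp_param_count(layer_sizes):
    """Count weights + biases of a fully connected network with the given layer sizes."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

inputs, classes = 784, 10            # e.g. 28x28-pixel images, 10 categories

# Deep architecture: many narrow layers stacked on top of each other.
deep    = [inputs] + [128] * 20 + [classes]

# Shallow architecture: very few, but very wide, layers.
shallow = [inputs, 8000, classes]

print(f"deep    ({len(deep) - 1} layers): {mlp_param_count(deep):,} parameters")
print(f"shallow ({len(shallow) - 1} layers): {mlp_param_count(shallow):,} parameters")

Both networks map the same inputs to the same ten classes; the deep one stacks many narrow layers while the shallow one spends its budget on width, which is the depth-versus-width trade-off the researchers describe.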

