News Release

Artificial visual perception nervous system using solution-processable MoS2-based in-memory light sensor

Peer-Reviewed Publication

Light Publishing Center, Changchun Institute of Optics, Fine Mechanics and Physics, CAS

(a) Human visual-perception system, (b) CNN model, and (c) confusion matrix

image: a, Schematic of the human visual-perception process: visual perception is one of the vital human senses, in which the brain decodes what the eyes see or sense. The human eye receives more than 80% of its information through light. In the visual-perception process, light from an external source enters the eye and is focused on the retina, which captures an image of the visual stimulus. Nerve cells in the retina act as photoreceptors that convert light into electrical impulses. These impulses travel along the optic nerve to the visual cortex at the back of the brain. b, A small convolutional neural network (CNN) was designed to demonstrate the device's optical sensing and electrical programming abilities. For that, we extracted images from the Canadian Institute for Advanced Research (CIFAR)-10 dataset for a simple binary image-recognition task, in which the objects “dog” and “automobile” were chosen as the classification targets. Each original image consists of three colour channels (red, green, and blue) with a size of 32×32×3, where each channel stores the discrete pixel intensities of its colour. Our device showed the capability to sense blue light; thus, we extracted only the pixels of the blue channel for the recognition task. c, The confusion matrix of the test results for 764 images from the CIFAR-10 dataset. The yellow diagonal elements of the matrix represent the correctly identified cases.

Credit: Kumar, D., Joharji, L., Li, H. et al.
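For readers who want a concrete picture of the preprocessing step described in the figure caption, the short Python sketch below shows one way to isolate the blue channel of CIFAR-10 images for a binary dog-versus-automobile task. It is a minimal illustration, not the authors' code: the use of torchvision to load the dataset and the standard class indices (automobile = 1, dog = 5) are assumptions made here.

# Minimal illustrative sketch (not the authors' code): extract the blue
# channel of CIFAR-10 images for a binary "dog" vs "automobile" task.
# Assumes torchvision is installed; class indices follow the standard
# CIFAR-10 labelling (automobile = 1, dog = 5).
import numpy as np
from torchvision.datasets import CIFAR10

dataset = CIFAR10(root="./data", train=False, download=True)
images = dataset.data                       # (10000, 32, 32, 3) uint8 RGB array
labels = np.array(dataset.targets)

keep = np.isin(labels, [1, 5])              # keep only automobiles and dogs
blue = images[keep][..., 2].astype(np.float32) / 255.0   # blue channel, scaled to [0, 1]
targets = (labels[keep] == 5).astype(np.int64)            # dog -> 1, automobile -> 0

print(blue.shape, targets.shape)            # single-channel inputs for a small CNN

Keeping only the blue channel reduces each image to a single 32×32 plane, matching the device's sensitivity to blue light described in the caption.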

Artificial intelligence (AI) and the Internet of Things (IoT) have led to the rapid expansion of sensory nodes, which produce an enormous volume of raw analog data that must be converted into digital form and then transmitted to other units for computation. However, the conventional von Neumann architecture, which consists of discrete devices, results in delays in data access and analysis along with high power consumption. This can be troublesome for revolutionary applications with stringent delay and power requirements, such as autonomous cars and robotics.

In a new paper published in Light: Science & Applications, a team of scientists led by Professor Nazek El-Atab from the Smart Advanced Memory devices and Applications (SAMA) laboratory, Electrical and Computer Engineering Program, King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia, together with co-workers from Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE, has developed a single sensing-storage-processing node using a two-terminal, solution-processable MoS2-based metal–oxide–semiconductor (MOS) device. The device embeds a light-sensitive 2D-material-based charge-trapping layer that mimics the human visual system: the same device is shown to be capable of optical data sensing, storage, and processing. The study highlights how such in-memory sensing and computing can improve response time, area, and energy efficiency, overcoming the delayed data access and hardware redundancy of the conventional von Neumann architecture.

The scientists summarize the operational principle of their in-memory sensing devices:

“When the device is exposed to light, it would directly store the wavelength and intensity of light within it, as opposed to traditional devices where a dedicated photosensor would detect the intensity/wavelength of light, then convert the data into the digital domain using an analog-to-digital converter, and then transfer the data to a separate memory for storage.”

“When operated as a traditional memory, the device showed a decent memory window of approximately 2.8 V at an operating voltage of +6/-6 V, 10-year retention at high temperature (100°C), and excellent endurance (10⁶ cycles) without any deterioration. Interestingly, the memory window widened from 2.8 V to more than 6 V when light of different wavelengths illuminated the device for 2 s during the program operation. This confirms that the device is able to sense light and to store it directly within the same node,” the scientists added.

“The effect of the number and duration of electrical and optical pulses on the memory window suggested that these devices can also mimic the perceptual learning of the human visual system. To confirm this, a convolutional neural network (CNN) was used to evaluate the device's optical sensing, storage, and processing capabilities. The simulated array received optical images transmitted at the blue-light wavelength and performed inference computations to process and recognize the images. The results show that our devices are able to recognize the objects in the images with 91% accuracy,” the scientists explained.
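As a rough illustration of the kind of small CNN described here, the sketch below defines a compact network in PyTorch that takes single-channel (blue) 32×32 inputs and outputs two class scores, one for “dog” and one for “automobile”. The layer sizes and filter counts are assumptions made for illustration; this is not the network reported in the paper, nor does it model the device array itself.

# Minimal illustrative sketch (architecture is an assumption, not the
# authors' network): a small CNN that classifies 32x32 blue-channel
# images as "dog" or "automobile".
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # single input channel: blue only
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(16 * 8 * 8, 2)       # two outputs: dog, automobile

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy = torch.rand(4, 1, 32, 32)    # a batch of four blue-channel images
print(model(dummy).shape)           # torch.Size([4, 2]) class scores

In the study, the simulated device array received the blue-channel images optically and performed the inference computation, reaching the 91% recognition accuracy summarized in the confusion matrix of figure panel c.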

“The demonstrated approach is promising for the development of future artificial retina networks for artificial visual perception and in-memory light-sensing applications. It should be noted that the demonstrated MOS memory devices use a structure similar to that of the Nobel Prize-winning charge-coupled devices (CCDs) in CCD cameras, which makes this study a significant step towards the development of smart CCD cameras with artificial visual-perception capabilities,” the scientists forecast.
