□ A research team led by Dr. Jinung An of the Division of Intelligent Robotics at DGIST (President Kunwoo Lee) has developed a new AI foundation model that creatively solves the 'label-data shortage' problem, regarded as the biggest challenge in deep learning-based brain-signal analysis. The technology learns from brain signals in a self-supervised manner and is drawing attention for its ability to deliver high accuracy with only a small amount of labeled data.
□ This research was jointly conducted by Dr. Jinung An (Principal Research Engineer, Division of Intelligent Robotics; Concurrent Professor, Interdisciplinary Engineering Major) and postdoctoral researchers Uijin Jeong and Sihu Park (Robotics and Mechatronics Research Institute, Research Group for Bio-Embodied Physical AI). Their key achievement is the world's first 'EEG-fNIRS Multimodal Foundation Model,' capable of understanding and analyzing both EEG (electroencephalography) and fNIRS (functional near-infrared spectroscopy) signals.
□ The research team secured approximately 1,250 hours of large-scale brain-signal data from 918 individuals and trained the model in an unsupervised manner, without labels. The model was designed to simultaneously capture the characteristics unique to EEG and fNIRS as well as the latent representations shared between the two signals.
□ In particular, developing multimodal AI has previously faced a significant obstacle: simultaneous EEG and fNIRS measurements are nearly impossible to obtain at scale. The model developed in this research was designed to train without requiring simultaneous measurement data. Furthermore, it achieves high accuracy with only a small amount of labels, and a single model can perform EEG-only analysis, fNIRS-only analysis, and multimodal analysis combining both signals, fully overcoming the structural limitations of existing technologies.
□ Dr. Jinung An stated, "This study is the first framework to overcome the structural limitations of multimodal brain-signal analysis, achieving a fundamental innovation in the field of brain-signal AI. In particular, the contrastive learning strategy, which aligns the shared information between the two signals, significantly expands the model's expressive power, and this will serve as a crucial turning point for the development of future brain-engineering technologies, such as brain-inspired AI and brain-computer interfaces (BCIs)."
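To illustrate the contrastive alignment idea mentioned above: the press release does not give the paper's exact objective, but a common InfoNCE-style formulation treats embeddings of EEG and fNIRS from the same trial as positive pairs and all other pairings in a batch as negatives. The sketch below is a minimal NumPy illustration of that generic technique, not the authors' implementation; the function name, embedding sizes, and temperature are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_alignment_loss(eeg_emb, fnirs_emb, temperature=0.1):
    """InfoNCE-style loss: EEG/fNIRS embeddings of the same trial (row i)
    are pulled together; mismatched batch pairs are pushed apart."""
    z_e = l2_normalize(eeg_emb)
    z_f = l2_normalize(fnirs_emb)
    logits = z_e @ z_f.T / temperature            # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))         # positives on the diagonal

# Toy check: embeddings sharing a common latent factor score lower than noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 16))
eeg_emb = shared + 0.05 * rng.normal(size=(8, 16))
fnirs_emb = shared + 0.05 * rng.normal(size=(8, 16))
aligned_loss = contrastive_alignment_loss(eeg_emb, fnirs_emb)
random_loss = contrastive_alignment_loss(rng.normal(size=(8, 16)),
                                         rng.normal(size=(8, 16)))
# aligned_loss is well below random_loss, as the objective intends
```

A practical appeal of this objective is that the two encoders only need to agree in a shared embedding space, which is one way a model can be trained from separately collected EEG and fNIRS datasets.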
□ This research was funded by the Ministry of Science and ICT and the National Research Foundation of Korea, and the results were published in Computers in Biology and Medicine, a well-known international journal in computational biology and medical informatics.
Journal
Computers in Biology and Medicine
Article Title
EFRM: A Multimodal EEG–fNIRS Representation-learning Model for few-shot brain-signal classification
Article Publication Date
11-Nov-2025