Image: VFF-Net introduces three new methodologies: label-wise noise labelling (LWNL), cosine similarity-based contrastive loss (CSCL), and layer grouping (LG), addressing the challenges of applying a forward-forward network to training convolutional neural networks.
Credit: Hyun Kim from Seoul National University of Science and Technology
Deep neural networks (DNNs), which power modern artificial intelligence (AI) models, are machine learning systems that learn hidden patterns from various types of data, be it images, audio, or text, to make predictions or classifications. DNNs have transformed many fields with their remarkable prediction accuracy. Training DNNs typically relies on back-propagation (BP). While it has become indispensable to the success of DNNs, BP has several limitations, such as slow convergence, overfitting, high computational requirements, and its black-box nature. Recently, forward-forward networks (FFNs) have emerged as a promising alternative in which each layer is trained individually, bypassing BP. However, applying FFNs to convolutional neural networks (CNNs), which are widely used for image analysis, has proven difficult.
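To make the layer-local idea concrete, the following is a minimal sketch of Hinton-style forward-forward training for a single layer, assuming the usual "goodness" objective (the mean squared activation, pushed above a threshold for positive data and below it for negative data); the class name and hyperparameters are illustrative, not drawn from VFF-Net.

```python
# Minimal sketch of forward-forward training for one layer (illustrative).
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=1e-3):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.act = nn.ReLU()
        self.threshold = threshold
        # Each layer owns its optimizer: no gradients flow between layers.
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so goodness from the previous layer cannot leak through.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = mean squared activation; push it above the threshold
        # for positive data and below it for negative data.
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        loss = torch.cat([
            torch.log1p(torch.exp(self.threshold - g_pos)),
            torch.log1p(torch.exp(g_neg - self.threshold)),
        ]).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach so the next layer trains on activations, not gradients.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```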
To address this challenge, a research team led by Mr. Gilha Lee and Associate Professor Hyun Kim from the Department of Electrical and Information Engineering at Seoul National University of Science and Technology has developed a new training algorithm, called visual forward-forward network (VFF-Net). The team also included Mr. Jin Shin. Their study was made available online on June 16, 2025, and published in Volume 190 of the journal Neural Networks on October 01, 2025.
Explaining the challenge of using FFNs to train CNNs, Mr. Lee says, “Directly applying FFNs for training CNNs can cause information loss in input images, reducing accuracy. Furthermore, for general-purpose CNNs with numerous convolutional layers, individually training each layer can cause performance issues. Our VFF-Net effectively addresses these issues.”
VFF-Net introduces three new methodologies: label-wise noise labelling (LWNL), cosine similarity-based contrastive loss (CSCL), and layer grouping (LG). In LWNL, the network is trained on three types of data: the original image without any noise, positive images with correct labels, and negative images with incorrect labels. This helps eliminate the loss of pixel information in the input images.
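As a rough illustration of the data LWNL describes, the hypothetical sketch below builds the three kinds of inputs from a batch of images, assuming a fixed noise pattern per class; the exact noise-injection scheme is defined in the paper, and all names here are illustrative.

```python
# Hypothetical sketch of label-wise noise labelling (LWNL) data construction:
# the clean original, a "positive" copy with the correct label's noise
# pattern, and a "negative" copy with a wrong label's pattern.
import torch

def make_lwnl_batch(images, labels, class_noise, num_classes=10, alpha=0.1):
    # images: (B, C, H, W); class_noise: (num_classes, C, H, W) fixed patterns.
    x_orig = images                                   # noise-free originals
    x_pos = images + alpha * class_noise[labels]      # correct-label noise
    wrong = (labels + torch.randint(1, num_classes, labels.shape)) % num_classes
    x_neg = images + alpha * class_noise[wrong]       # incorrect-label noise
    return x_orig, x_pos, x_neg

# Usage: fix one random noise pattern per class once, then reuse it.
class_noise = torch.randn(10, 3, 32, 32)
images, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_orig, x_pos, x_neg = make_lwnl_batch(images, labels, class_noise)
```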
CSCL modifies the conventional goodness-based greedy algorithm, applying a contrastive loss function based on the cosine similarity between feature maps. Essentially, it compares two feature representations by the direction of their activation patterns rather than their magnitude, which helps preserve the spatial information needed for image classification. Finally, LG addresses the drawbacks of individual layer training by grouping layers with the same output characteristics and adding auxiliary layers, significantly improving performance.
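A minimal sketch of what such a cosine similarity-based contrastive loss could look like is shown below; it assumes the loss pulls a layer's feature maps for positively labelled inputs toward those of the clean original while pushing negatively labelled ones away, with the function name and margin term being illustrative rather than taken from the paper.

```python
# Illustrative cosine similarity-based contrastive loss over feature maps.
import torch
import torch.nn.functional as F

def cscl_loss(f_orig, f_pos, f_neg, margin=0.0):
    # Flatten spatial feature maps to vectors: (B, C, H, W) -> (B, C*H*W).
    v_o = F.normalize(f_orig.flatten(1), dim=1)
    v_p = F.normalize(f_pos.flatten(1), dim=1)
    v_n = F.normalize(f_neg.flatten(1), dim=1)
    sim_pos = (v_o * v_p).sum(dim=1)   # cosine similarity, in [-1, 1]
    sim_neg = (v_o * v_n).sum(dim=1)
    # Encourage sim_pos to exceed sim_neg by at least `margin`.
    return F.relu(margin + sim_neg - sim_pos).mean()
```

Because cosine similarity ignores magnitude, only the direction of the feature patterns drives this objective, matching the directional comparison described above.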
Thanks to these innovations, VFF-Net significantly improves image classification performance compared to conventional FFNs. For a CNN model with four convolutional layers, test errors on the CIFAR-10 and CIFAR-100 datasets were reduced by 8.31% and 3.80%, respectively. Additionally, the fully connected layer-based VFF-Net achieved a test error of just 1.70% on the MNIST dataset.
“By moving away from BP, VFF-Net paves the way toward lighter and more brain-like training methods that do not need extensive computing resources,” says Dr. Kim. “This means powerful AI models could run directly on personal devices, medical devices, and household electronics, reducing reliance on energy-intensive data centres and making AI more sustainable.”
Overall, VFF-Net could make AI training faster and cheaper while enabling more natural, brain-like learning, paving the way for more trustworthy AI systems.
***
Reference
DOI: 10.1016/j.neunet.2025.107697
About Seoul National University of Science and Technology (SEOULTECH)
Seoul National University of Science and Technology, commonly known as 'SEOULTECH,' is a national university located in Nowon-gu, Seoul, South Korea. Founded in April 1910, SEOULTECH has grown into a large and comprehensive university with a campus size of 504,922 m².
It comprises 10 undergraduate schools, 35 departments, and 6 graduate schools, and enrolls approximately 14,595 students.
Website: https://en.seoultech.ac.kr/
About Associate Professor Hyun Kim
Dr. Hyun Kim is an Associate Professor in the Department of Electrical and Information Engineering at Seoul National University of Science and Technology, South Korea, and leads the Intelligent Digital Systems Design Lab (IDSL). His research focuses on artificial intelligence and hardware-efficient computing architectures, emphasizing AI semiconductors, system-on-chip, and accelerator design. He also serves on editorial boards and program committees for leading international journals and conferences.
About Mr. Gilha Lee
Mr. Gilha Lee is a Ph.D. candidate in the Department of Electrical and Information Engineering at Seoul National University of Science and Technology, South Korea. His research focuses on deep learning algorithms, neural network optimization, and training methods beyond backpropagation, with broader interests in model compression, lightweighting, and FPGA-based acceleration for resource-constrained environments.
Journal
Neural Networks
Method of Research
Computational simulation/modeling
Subject of Research
Not applicable
Article Title
VFF-Net: Evolving forward–forward algorithms into convolutional neural networks for enhanced computational insights
Article Publication Date
1-Oct-2025
COI Statement
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.