News Release

Researchers from Gwangju Institute of Science and Technology Develop a New Method for Denoising Images

The approach involves a post-correction network optimized by a self-supervised machine learning framework to improve the quality of unfamiliar images

Reports and Proceedings

GIST (Gwangju Institute of Science and Technology)

Towards better image denoising with a self-supervised post-correction network.

image: Researchers from the Gwangju Institute of Science and Technology in Korea, VinAI Research in Vietnam, and the University of Waterloo in Canada have proposed a new method to improve the quality of path-traced visuals using a post-correction network and a self-supervised machine learning framework. The model can be trained on the fly to output high-quality images in just 12 seconds.

Credit: Bochang Moon from Gwangju Institute of Science and Technology, Korea

High-quality computer graphics are ubiquitous in games, illustrations, and visualization, and represent the state of the art in visual display technology. The method of choice for rendering realistic images is “path tracing,” a Monte Carlo (MC) technique that estimates each pixel’s color by averaging random light-path samples and therefore produces noisy images at practical sample counts. To remove this noise, state-of-the-art denoisers rely on supervised machine learning: a model is first pre-trained on pairs of noisy and clean images and then applied to the actual noisy rendering (the test image). While this approach yields the best image quality, it may fail when the test image differs markedly from the images used for training.
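To see why path-traced images are noisy in the first place, consider that each pixel is a Monte Carlo average whose standard error shrinks only as one over the square root of the sample count. The toy NumPy sketch below illustrates this with synthetic values (a made-up "true" pixel color and noise level, not actual light transport):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ground-truth pixel color and per-sample noise level.
true_pixel = 0.5
sample_noise = 1.0

def render_pixel(num_samples, trials=2000):
    """Monte Carlo pixel estimate, repeated `trials` times to measure its noise."""
    samples = true_pixel + rng.normal(0, sample_noise, (trials, num_samples))
    estimates = samples.mean(axis=1)  # one MC estimate per trial
    return estimates.std()            # empirical standard error of the estimate

# Quadrupling the sample count only halves the noise, which is why
# renderers pair path tracing with a denoiser instead of brute force.
for n in (16, 64, 256):
    print(f"{n:4d} samples/pixel -> noise ~ {render_pixel(n):.3f}")
```

The 1/sqrt(N) convergence shown here is the standard Monte Carlo rate; denoising exists precisely because paying for ever more samples is a losing proposition.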

To address this problem, a team including Ph.D. student Jonghee Back and Associate Professor Bochang Moon of the Gwangju Institute of Science and Technology in Korea, Research Scientist Binh-Son Hua of VinAI Research in Vietnam, and Associate Professor Toshiya Hachisuka of the University of Waterloo in Canada has proposed an MC denoising method that does not rely on a reference. Their study was made available online on 24 July 2022 and published in the ACM SIGGRAPH 2022 Conference Proceedings.

“The existing methods not only fail when the test and training datasets are very different but also take a long time to prepare the training dataset for pre-training the network. What is needed is a neural network that can be trained on the fly with only test images, without the need for pre-training,” says Dr. Moon, explaining the motivation behind the study.

To accomplish this, the team proposed a post-correction approach for denoised images that combines a self-supervised machine learning framework with a post-correction network, itself a convolutional neural network for image processing. The post-correction network requires no pre-trained model and is optimized through self-supervised learning, without relying on a reference. Moreover, the self-supervised model complements and boosts conventional supervised denoisers.
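The core idea of reference-free optimization can be illustrated with a deliberately simplified sketch. In the toy below, two independent noisy renderings of the same scene serve as each other's training signal (since their expected value is the clean image), and a single blend weight stands in for the paper's post-correction network. All signals here are synthetic 1D stand-ins for images, and this is not the authors' actual architecture or loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth "clean" 1D signal standing in for a rendered frame.
x = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * x)

# Two independent noisy Monte Carlo estimates of the same scene.
noisy_a = clean + rng.normal(0, 0.3, clean.shape)
noisy_b = clean + rng.normal(0, 0.3, clean.shape)

# A crude "denoiser": box-filter noisy_a (low noise, but slightly blurred).
kernel = np.ones(9) / 9
denoised = np.convolve(noisy_a, kernel, mode="same")

# Self-supervised post-correction: blend the denoised and noisy images with
# weight w, chosen to best predict the *independent* buffer noisy_b. Because
# the expectation of noisy_b is the clean image, no reference is ever needed.
w = 0.0
lr = 0.5
for _ in range(200):
    corrected = w * denoised + (1 - w) * noisy_a
    grad = 2 * np.mean((corrected - noisy_b) * (denoised - noisy_a))
    w -= lr * grad

print(f"learned blend weight w = {w:.3f}")
print(f"MSE vs clean, noisy input: {np.mean((noisy_a - clean) ** 2):.4f}")
corrected = w * denoised + (1 - w) * noisy_a
print(f"MSE vs clean, corrected:   {np.mean((corrected - clean) ** 2):.4f}")
```

The learned blend beats the raw noisy input without ever seeing the clean signal, which is the essence of self-supervision; the actual method optimizes a convolutional network on the fly rather than one scalar weight.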

To test the effectiveness of the proposed network, the team applied their approach on top of existing state-of-the-art denoising methods. The model demonstrated a three-fold improvement in rendered image quality relative to the input image while preserving finer details. Moreover, the entire process of on-the-fly training and final inference took only 12 seconds!

“Our approach is the first that does not rely on pre-training using an external dataset. This, in effect, will shorten the production time and improve the quality of offline rendering-based content such as animation and movies,” remarks Dr. Moon, speculating about the potential applications of their work.

Indeed, it may not be long before this technique finds use in high-quality graphics rendering for video games, augmented and virtual reality, and the metaverse!

 

***

 

Reference

DOI: https://doi.org/10.1145/3528233.3530730

 

Authors: Jonghee Back1, Binh-Son Hua2, Toshiya Hachisuka3, and Bochang Moon1

 

Affiliations:

1Gwangju Institute of Science and Technology

2VinAI Research

3University of Waterloo

 

About the Gwangju Institute of Science and Technology (GIST)
The Gwangju Institute of Science and Technology (GIST) is a research-oriented university situated in Gwangju, South Korea. Founded in 1993, GIST has become one of the most prestigious schools in South Korea. The university aims to create a strong research environment to spur advancements in science and technology and to promote collaboration between international and domestic research programs. With its motto of “A Proud Creator of Future Science and Technology,” GIST has consistently received one of the highest university rankings in Korea.

Website: http://www.gist.ac.kr/

 

About the authors

Jonghee Back is a Ph.D. student at the Computer Graphics Lab in the School of Integrated Technology at GIST. His research interests are in the field of artificial intelligence, with a focus on physically based rendering.

Binh-Son Hua is a Research Scientist at VinAI Research, Vietnam. He received his Ph.D. from the National University of Singapore in 2015. His research interests lie in computer graphics and computer vision, focusing on physically-based image synthesis and 3D deep learning.

Toshiya Hachisuka is currently an Associate Professor at the David R. Cheriton School of Computer Science, the University of Waterloo in Canada. He received his Ph.D. in Computer Science from the University of California, San Diego in 2011. His research interests comprise computer graphics rendering, light transport, Monte Carlo methods, and numerical computation.

Bochang Moon is an Associate Professor in the School of Integrated Technology at GIST, where he heads the Computer Graphics Lab. He received his Ph.D. in Computer Science from the Korea Advanced Institute of Science & Technology in 2014. His research interests include photorealistic rendering, Monte Carlo ray tracing, rendering using artificial intelligence, and augmented and virtual reality.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.