In a new paper published in eLight, a team of scientists led by Professor Bahram Jalali and graduate student Callen MacPhee from UCLA has developed a new algorithm for performing computational imaging tasks. The paper “VEViD: Vision Enhancement via Virtual diffraction and coherent Detection” introduces a physics-based algorithm that corrects for poor illumination and low contrast in images captured in low-light conditions.
In such conditions, digital images often suffer from undesirable visual qualities such as low contrast, feature loss, and poor signal-to-noise ratio. Low-light image enhancement aims to improve these qualities for two purposes: better visual quality for human perception and higher accuracy for computer vision algorithms. For human viewing, real-time processing is a convenience; for emerging applications such as autonomous vehicles and security, where image processing must be completed with low latency, it is a requirement.
The paper shows that physical diffraction and coherent detection can be used as a toolbox for the transformation of digital images and videos. This approach leads to a new and surprisingly powerful algorithm for low-light and color enhancement. Unlike traditional algorithms, which are mostly hand-crafted empirical rules, the VEViD algorithm emulates physical processes. In contrast to deep learning-based approaches, this technique is unique in having its roots in deterministic physics. The algorithm is interpretable and does not require labeled data for training. The authors explain that although the mapping to physical processes is not exact, it may be possible in the future to implement a physical device that executes the algorithm in the analog domain.
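The core idea can be sketched in a few lines: treat the brightness channel of an image as a virtual optical field, and recover an enhanced brightness from the field's phase via an arctangent ("coherent detection"). The sketch below is a minimal, simplified illustration of this phase-based transform, not the paper's exact formulation; the function name, parameter names, and default values for the gain and bias are illustrative assumptions.

```python
import numpy as np

def vevid_lite(v, gain=1.0, bias=0.16):
    """Simplified VEViD-style enhancement of a brightness channel v in [0, 1].

    The input brightness is mapped to the phase angle of a virtual
    complex field, then the phase is normalized back into [0, 1].
    `gain` and `bias` values here are illustrative, not the paper's.
    """
    v = np.asarray(v, dtype=np.float64)
    # "Coherent detection": recover the phase via an arctangent of the
    # field's imaginary and real parts.
    phase = np.arctan2(-gain * (v + bias), v)
    # Min-max normalize the phase to form the enhanced brightness channel.
    return (phase - phase.min()) / (phase.max() - phase.min())
```

Because the arctangent compresses large values and expands small ones, dark tones are lifted strongly while bright tones are left nearly saturated, which is exactly the behavior wanted for low-light enhancement.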
The paper demonstrates the high performance of VEViD in several imaging applications, such as security cameras, night-time driving, and space exploration. Also demonstrated is VEViD’s ability to perform color enhancement. The algorithm’s exceptional computational speed is demonstrated by processing 4K video at over 200 frames per second. Comparison with leading deep learning algorithms shows comparable or better image quality with one to two orders of magnitude faster processing.
Deep neural networks have proven to be powerful tools for object detection and tracking, and they are key to several emerging technologies that leverage autonomous machines. The authors show the utility of VEViD as a pre-processing tool that increases the accuracy of object detection by a popular neural network (YOLO). Processing an image first with VEViD allows neural networks trained on daylight images to recognize objects in night-time environments without retraining, making these networks more robust while saving vast amounts of time and energy.
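The pre-processing idea can be sketched as follows: lift the shadows in each frame with a VEViD-style phase transform, then hand the enhanced frame to an unmodified, daylight-trained detector. This is a simplified sketch under stated assumptions: the brightness channel is approximated by the per-pixel max over R, G, B, the gain and bias defaults are illustrative, and the detector call itself (e.g. a YOLO model) is omitted.

```python
import numpy as np

def enhance_for_detection(rgb, gain=1.0, bias=0.16):
    """Lift shadows in an RGB frame (values in [0, 1]) before detection.

    Simplified sketch: enhance the brightness channel with a
    VEViD-style phase transform, then rescale the color channels
    proportionally so hue is roughly preserved.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    v = rgb.max(axis=-1)                        # brightness channel
    phase = np.arctan2(-gain * (v + bias), v)   # virtual coherent detection
    v_out = (phase - phase.min()) / (phase.max() - phase.min())
    # Scale each pixel's RGB by the brightness boost it received.
    scale = np.where(v > 0, v_out / np.maximum(v, 1e-12), 0.0)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

# The enhanced frame would then be fed to the detector unchanged,
# e.g. detections = yolo_model(enhanced_frame)  # hypothetical call
```

Because the detector itself is untouched, a network trained only on daylight imagery sees inputs that look much closer to its training distribution, which is why no retraining is needed.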