News Release

Seeing through random diffusers instantly without a computer

Peer-Reviewed Publication

Light Publishing Center, Changchun Institute of Optics, Fine Mechanics and Physics, CAS

Computational Imaging Without a Computer

Image: A diffractive network can image objects through random diffusers at the speed of light.

Credit: by Yi Luo, Yifan Zhao, Jingxi Li, Ege Çetintaş, Yair Rivenson, Mona Jarrahi, and Aydogan Ozcan

Imaging through scattering and diffusive media has been a challenge for many decades, with numerous solutions reported so far. In principle, images distorted by random diffusers (such as frosted glass) can be recovered using a computer. However, existing methods rely on sophisticated algorithms running on digital computers that process the distorted images to correct them.


Adaptive optics-based methods have also been applied in various scenarios to see through diffusive media, and with significant advances in wavefront shaping, wide-field real-time imaging through turbid media has become possible. However, in addition to digital computers, these methods require guide stars or known reference objects, adding complexity to the imaging system. As another alternative, deep neural networks have been trained on image pairs composed of diffuser-distorted objects and their distortion-free counterparts; here, too, the trained networks reconstruct the distorted images digitally, on a computer.


A new paper published in eLight introduces an entirely new paradigm for imaging objects through diffusive media. In the paper, entitled "Computational Imaging Without a Computer: Seeing Through Random Diffusers at the Speed of Light," UCLA researchers led by Professor Aydogan Ozcan present a method to see through random diffusive media instantly, without any digital processing. The approach is computer-free: it all-optically reconstructs the images of objects distorted by unknown, randomly generated phase diffusers.


To achieve this, the team trained a set of diffractive surfaces (transmissive layers) using deep learning to optically reconstruct the image of an unknown object placed entirely behind a random diffuser. The diffuser-distorted input optical field diffracts through the successive trained layers, so the image reconstruction process is completed at the speed of light propagation through the diffractive layers. Each trained diffractive surface has tens of thousands of diffractive features (termed neurons) that collectively compute the desired image at the output.
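
The paper's exact optical design is not reproduced here, but the forward computation can be illustrated numerically. The following is a minimal Python sketch, assuming a scalar angular-spectrum model of free-space propagation and pure phase-modulating layers; the function names and the sampling pitch dx, wavelength, and layer spacing z are illustrative placeholders, not the authors' design values.

    import numpy as np

    def propagate(field, dx, wavelength, z):
        # Free-space propagation of a complex field via the angular spectrum method.
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)                   # spatial frequencies (1/m)
        fxx, fyy = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength**2 - fxx**2 - fyy**2    # evanescent where arg < 0
        kernel = np.where(arg > 0,
                          np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0.0)
        return np.fft.ifft2(np.fft.fft2(field) * kernel)

    def diffractive_network(distorted_field, phase_layers, dx, wavelength, z):
        # The diffuser-distorted field passes through each trained phase layer in turn;
        # every pixel of a phase map acts as one diffractive "neuron".
        field = distorted_field
        for phase in phase_layers:
            field = propagate(field, dx, wavelength, z)
            field = field * np.exp(1j * phase)         # modulation by a trained surface
        field = propagate(field, dx, wavelength, z)    # final hop to the image plane
        return np.abs(field) ** 2                      # detected output intensity

Because every step is passive linear optics plus fixed phase delays, evaluating this "network" physically amounts to nothing more than light traversing the fabricated layers.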


During training, many different, randomly selected phase diffusers were used to help the optical network generalize. After this one-time, deep learning-based design, the resulting layers are fabricated and assembled into a physical network positioned between an unknown, new diffuser and the output/image plane. The trained network collects the scattered light behind the random diffuser and reconstructs an image of the object all-optically.
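
As a rough illustration of that training strategy, the hypothetical sketch below (PyTorch, not the authors' code; the grid size, propagation distances, object statistics, and learning rate are made-up placeholders) draws a fresh random phase diffuser for every training example, so the optimized phase maps cannot memorize any single diffuser:

    import torch

    def propagate(field, dx, wl, z):
        # Angular-spectrum free-space propagation (same model as the sketch above).
        n = field.shape[-1]
        fx = torch.fft.fftfreq(n, d=dx)
        fyy, fxx = torch.meshgrid(fx, fx, indexing="ij")
        arg = (1.0 / wl**2 - fxx**2 - fyy**2).clamp(min=0.0)
        kernel = torch.exp(2j * torch.pi * z * torch.sqrt(arg)) * (arg > 0).to(torch.cfloat)
        return torch.fft.ifft2(torch.fft.fft2(field) * kernel)

    n, dx, wl, z = 128, 0.4e-3, 0.75e-3, 3e-3         # illustrative THz-scale numbers
    layers = [torch.zeros(n, n, requires_grad=True) for _ in range(4)]  # trainable phase maps
    opt = torch.optim.Adam(layers, lr=0.01)

    for step in range(2000):
        obj = (torch.rand(n, n) > 0.9).float()                   # stand-in amplitude object
        diffuser = torch.exp(2j * torch.pi * torch.rand(n, n))   # fresh random phase diffuser
        field = propagate(obj * diffuser, dx, wl, z)             # distorted field at layer 1
        for phase in layers:
            field = propagate(field * torch.exp(1j * phase), dx, wl, z)
        loss = torch.mean((field.abs() ** 2 - obj) ** 2)         # output should match object
        opt.zero_grad()
        loss.backward()
        opt.step()

After such an optimization converges, the phase maps are frozen; fabricating them yields the passive physical network described above.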


There is no need for a computer or a digital reconstruction algorithm to image through an unknown diffuser. Moreover, this diffractive processor does not use any external power source aside from the light that illuminates the object behind the diffuser.


The research team experimentally validated this approach using terahertz waves. They fabricated the designed diffractive networks with a 3D printer and demonstrated that the networks can see through randomly generated phase diffusers never used during training. The team also improved the reconstruction quality by using deeper diffractive networks with additional fabricated layers.


The all-optical image reconstruction achieved by these passive diffractive layers allowed the team to see objects through unknown random diffusers, and it presents an extremely low-power solution compared with existing deep learning-based or iterative image reconstruction methods that run on digital computers.


The researchers believe their method could be applied to other parts of the electromagnetic spectrum, including the visible and far/mid-infrared wavelengths. The reported proof-of-concept results were obtained with a thin, random diffuser layer; the team believes the underlying methods can potentially be extended to see through volumetric diffusers, such as fog.


This approach can enable significant advances in fields where imaging through diffusive media is of utmost importance, including biomedical imaging, astronomy, autonomous vehicles, robotics, and defense/security applications.

