News Release

Ultra-thin 3D lensless fiber endoscopy using diffractive optical elements and deep neural networks

Peer-Reviewed Publication

Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Working principle of miniaturized endoscopes

image: a, Each fiber core exhibits a random phase delay, which adds to the in-coupled wavefront and results in a high-spatial-frequency disturbance at the fiber output. b, Phase distortions are compensated using a diffractive optical element (DOE). Focusing at the distal fiber side is achieved by adding the phase structure of a Fresnel lens to the DOE at the proximal fiber side. c, Scheme and principle of a diffuser endoscope. The diffuser at the distal side encodes the 3D object information into a 2D speckle pattern, which is transferred through the CFB to the proximal side. The 3D information is recovered in real time using a neural network.

Credit: by Robert Kuschmierz and Juergen W Czarske

Miniaturized endoscopes with micron resolution, sub-millimeter diameters and 3D imaging capabilities play an important role in medical imaging and diagnostics. Common flexible endoscopes are based on coherent fiber bundles (CFBs), also called multi-core fibers, which relay intensity patterns from the hidden region at the distal fiber facet to the instrument at the proximal fiber facet. A lens system at the distal fiber end (de-)magnifies the core-to-core distance and defines the resolution. CFBs offer diameters down to a few hundred microns for minimally invasive access. However, the distal optics increase the footprint of the endoscope, usually into the millimeter range. This is critical for several biomedical applications, for instance in the brain. Furthermore, CFBs exhibit strong phase distortions. The light phase is crucial, however, since it contains an object's depth information. In recent years, multiple approaches using programmable optics have been presented to compensate for these distortions and enable the undistorted transfer of intensity and phase for 3D imaging without distal optics. However, the proposed setups suffer from high complexity and high costs.

 

In a new paper published in Light: Science & Applications, a team of scientists led by Dr. Robert Kuschmierz and Professor Jürgen Czarske, from the Laboratory of Measurement and Sensor System Technique, TU Dresden, Germany, has demonstrated the use of static diffractive optical elements made by 2-photon lithography to enable high-resolution endomicroscopy without programmable optics. This can enable robust and low-cost 3D endoscopes with diameters below 0.5 millimeters.

 

CFBs combine several thousand individual fibers and enable the pixelated transfer of intensity images of objects in focus. However, each single fiber exhibits an arbitrary phase distortion, which prohibits relaying depth information or imaging out-of-focus objects. The researchers pursued two different approaches to circumvent this problem.

 

In the first approach, they measured the phase distortion of the CFB using digital holography. They then applied a diffractive optical element (DOE), made by 2-photon polymerization, to the CFB. The DOE consists of several thousand pillars of different heights, with each pillar placed directly on top of an individual fiber. Varying the pillar height made it possible to match the phase distortions, so that the output light field closely matched the input light field. The CFB-DOE combination thus becomes “truly invisible”, which means it can be introduced into standard microscopes and enables 3D microscopy deep inside the human body.
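The compensation principle described above can be sketched numerically. The following is an illustrative toy model, not the authors' code: the wavelength, polymer refractive index, core count and geometry are all assumed values, chosen only to show how pillar heights can cancel the measured per-core phase delays and how a Fresnel-lens phase could be superimposed for focusing.

```python
import numpy as np

# Toy sketch of the DOE phase-compensation idea (assumed parameters):
# each fiber core i adds a random phase delay phi[i]; a polymer pillar
# of height h[i] printed on that core adds the phase (n - 1) * k * h[i],
# chosen so the total phase is flat across the bundle.

rng = np.random.default_rng(0)
n_cores = 10_000                 # a CFB bundles several thousand cores
wavelength = 633e-9              # m, assumed illumination wavelength
k = 2 * np.pi / wavelength       # vacuum wavenumber
n_polymer = 1.5                  # assumed refractive index of the printed polymer

# Per-core phase delays, as would be measured by digital holography:
phi = rng.uniform(0.0, 2 * np.pi, n_cores)

# Pillar heights that cancel each delay modulo one wave:
h = ((2 * np.pi - phi) % (2 * np.pi)) / (k * (n_polymer - 1))

# Residual phase after the DOE is a multiple of 2*pi -> flat wavefront:
field_out = np.exp(1j * (phi + k * (n_polymer - 1) * h))

# Focusing: superimpose the phase of a Fresnel lens, -k * r**2 / (2 * f),
# by adding extra pillar height (r: radial core position, f: focal length).
r = rng.uniform(0.0, 150e-6, n_cores)   # assumed core positions within a 300 um bundle
f = 1e-3                                # assumed 1 mm working distance
h_lens = ((-k * r**2 / (2 * f)) % (2 * np.pi)) / (k * (n_polymer - 1))
```

Note that the required heights stay below one wave of optical path, i.e. below λ/(n−1) ≈ 1.3 µm here, a scale well within reach of 2-photon polymerization.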

 

In the second approach, they replaced the standard lens normally found at the tip of fiber endoscopes with a random DOE. Such DOEs convert the light from each point in 3D space into a unique, pseudo-random intensity pattern. The image of an object consisting of multiple points is the superposition of the corresponding patterns. Since the 3D object information is now encoded in the intensity alone, it can be transferred through the CFB regardless of phase distortions. However, the resulting intensity images do not resemble the object in a way recognizable to humans. Therefore, the researchers used artificial intelligence to reconstruct the 3D objects from the intensity images.
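The superposition idea above can be illustrated with a small linear toy model. This is not the authors' pipeline: they train a neural network for real-time reconstruction, whereas the hypothetical sketch below uses a plain least-squares inversion of an assumed calibration matrix, simply to show that the coded speckle intensity still contains the 3D point positions.

```python
import numpy as np

# Toy diffuser-imaging model (illustrative assumptions throughout):
# every 3D voxel j maps to a fixed pseudo-random speckle pattern A[:, j];
# a multi-point object x then produces the superposition y = A @ x.

rng = np.random.default_rng(1)
n_pixels = 2000        # speckle pixels recorded at the proximal facet
n_voxels = 500         # discretized 3D object voxels

# Calibration: one recorded speckle pattern per voxel (random stand-in):
A = rng.random((n_pixels, n_voxels))

# Object: three point emitters at assumed voxel indices in 3D space:
x_true = np.zeros(n_voxels)
x_true[[10, 123, 400]] = 1.0

y = A @ x_true                                  # measured speckle superposition
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]    # linear stand-in for the network

print(np.flatnonzero(x_hat > 0.5))              # recovered emitter positions
```

A least-squares solve scales poorly and is noise-sensitive for realistic voxel counts, which is one reason a trained neural network is attractive for the real-time 3D recovery reported in the paper.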

