News Release

Artificial intelligence technology accelerates super-resolution localization photoacoustic imaging of blood vessels

Peer-Reviewed Publication

Light Publishing Center, Changchun Institute of Optics, Fine Mechanics and Physics, CAS

Deep Learning-Based 3D Label-free Localization Optical-Resolution Photoacoustic Microscopy.

image: Frame counts of 60 and 5 are used for the dense and sparse localization-based images, respectively. Close-up views of the regions outlined by the green dashed boxes and cross-sectional B-mode images of the region highlighted by the blue dashed lines are shown. In close-up MAP (i), the two adjacent blood vessels are clearly resolved in the deep learning and dense localization-based images, whereas they are not resolved in the regular OR-PAM image. In close-up B-mode (ii), a blood vessel highlighted by the white dashed circles, which has a low signal-to-noise ratio in the sparse image, is well restored in the deep learning localization-based image. Even though the sparse image does not contain these vessels, they are restored in the deep learning localization-based image because the network is based on 3D convolutions, which allow adjacent pixels in 3D space to be referenced.

Credit: Jongbeom Kim, Gyuwon Kim, Lei Li, Pengfei Zhang, Jin Young Kim, Yeonggeun Kim, Hyung Ham Kim, Lihong V. Wang, Seungchul Lee, Chulhong Kim

After a lightning strike, thunder can be heard shortly afterward. This is because the material struck by the lightning absorbs the light; as the light is converted into heat, the material expands and produces a sound. Photoacoustic imaging (PAI), an imaging technique that uses this phenomenon to visualize the inside of the body, is being explored as a new premier medical imaging modality in various preclinical and clinical applications.

 

PAI has recently adopted the 'localization imaging' method, which images the same area numerous times to achieve spatial resolution beyond the physical resolution limit, regardless of imaging depth. However, this superior spatial resolution comes at the cost of temporal resolution, since many frames, each containing localization targets, must be superimposed to form a sufficiently sampled, high-density super-resolution image. This has made the method difficult to use in studies that need to capture an immediate response.
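To make this trade-off concrete, the minimal Python sketch below (not the authors' code; the function name, field-of-view, grid size, and random test data are illustrative assumptions) shows how a localization image is built by superimposing the localized point sources from many frames onto a fine grid. The fewer the frames, the sparser and noisier the resulting map, which is why conventional localization imaging is slow.

```python
import numpy as np

def localization_image(frames_of_points, fov_mm=(3.0, 3.0), grid_px=(1024, 1024)):
    """Accumulate localized point sources from many frames onto a fine grid.

    frames_of_points: list of (N_i, 2) arrays of localized (x, y) positions in mm,
    one array per acquired frame. More frames -> denser sampling of the vasculature.
    """
    density = np.zeros(grid_px)
    for pts in frames_of_points:
        # Map each localized position to a pixel on the fine (super-resolution) grid.
        ix = np.clip((pts[:, 0] / fov_mm[0] * grid_px[0]).astype(int), 0, grid_px[0] - 1)
        iy = np.clip((pts[:, 1] / fov_mm[1] * grid_px[1]).astype(int), 0, grid_px[1] - 1)
        np.add.at(density, (ix, iy), 1.0)  # superimpose this frame's localizations
    return density

# Illustrative comparison: 60 dense frames vs. 5 sparse frames (as in the figure above)
rng = np.random.default_rng(0)
dense = localization_image([rng.uniform(0, 3.0, (200, 2)) for _ in range(60)])
sparse = localization_image([rng.uniform(0, 3.0, (200, 2)) for _ in range(5)])
```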

 

In a new paper published in Light: Science & Applications, a team of scientists led by Professor Chulhong Kim, Professor Seungchul Lee, Ph.D. candidate Jongbeom Kim, and M.S. student Gyuwon Kim of the Departments of Electrical Engineering, Convergence IT Engineering, and Mechanical Engineering at Pohang University of Science and Technology (POSTECH), South Korea, together with Professor Lihong Wang and Postdoctoral Research Associate Lei Li of the California Institute of Technology (Caltech), USA, has developed an AI-based localization PAI method that overcomes the slow imaging speed. By using deep learning to boost the imaging speed and reduce the laser exposure to the body, the method addresses three issues at once: slow imaging speed, limited spatial resolution, and the burden on the body.
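The release does not detail the network itself, but the figure caption notes that it is based on 3D convolutions, which let it reference adjacent pixels in 3D space. The toy PyTorch sketch below illustrates the general idea only; the architecture, channel counts, tensor shapes, and loss are assumptions for illustration, not the authors' implementation, which is described in the paper. A 3D convolutional network learns to map a sparse localization volume, reconstructed from only a few frames, to an estimate of the dense volume.

```python
import torch
import torch.nn as nn

class SparseToDense3D(nn.Module):
    """Toy 3D-convolutional network: maps a sparse localization volume
    (built from a few frames) to an estimate of the dense volume."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):      # x: (batch, 1, depth, height, width)
        return self.net(x)     # 3D kernels draw on neighboring voxels in all three dimensions

# Training would pair sparse inputs (few frames) with dense targets (many frames).
model = SparseToDense3D()
sparse_volume = torch.rand(1, 1, 32, 64, 64)    # illustrative shape only
dense_estimate = model(sparse_volume)
loss = nn.functional.mse_loss(dense_estimate, torch.rand_like(dense_estimate))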

 

Using deep learning, the research team reduced the number of frames required by more than tenfold and increased the imaging speed twelvefold. The imaging times of localization photoacoustic microscopy and localization photoacoustic computed tomography were reduced from 30 seconds to 2.5 seconds and from 30 minutes to 2.5 minutes, respectively.

 

This advancement opens up new possibilities for localization PAI in preclinical and clinical applications that require both high speed and fine spatial resolution, such as studies of instantaneous drug and hemodynamic responses. Above all, a major advantage of this technology is that it significantly reduces both laser exposure to the living body and the imaging time, lessening the burden on patients.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.