News Release

Lightning-fast algorithms can lighten the load of 3D hologram generation

Faster calculation method for next-gen augmented reality devices

Peer-Reviewed Publication

Tokyo Metropolitan University

Making a Hologram

video: (left) Different images at depths (a) and (b) (see right) show how the distribution of light over space forms a truly 3D image. (right) Schematic of holography setup. The calculated hologram is displayed on a spatial light modulator while laser light is directed to reflect off its surface, interfere with the original beam and form a 3D image at the camera.

Credit: Tokyo Metropolitan University

Tokyo, Japan - Researchers from Tokyo Metropolitan University have developed a new way of calculating simple holograms for heads-up displays (HUDs) and near-eye displays (NEDs). The method is up to 56 times faster than conventional algorithms and does not require power-hungry graphics processing units (GPUs), running on normal computing cores like those found in PCs. This opens the way to developing compact, power-efficient, next-gen augmented reality devices, including 3D navigation on car windshields and eyewear.

The term hologram may still have a sci-fi ring to it, but holography, the science of recording light in 3D, is used everywhere, from microscopy and fraud prevention on banknotes to state-of-the-art data storage. Everywhere, that is, except for its most obvious application: truly 3D displays that don't need special glasses, which have yet to become widespread. Recent advances have brought virtual reality (VR) technologies to market, but the vast majority rely on optical tricks that convince the human eye to see things in 3D. Such tricks are not always feasible and limit where these displays can be used.

One of the reasons for this is that generating the hologram of an arbitrary 3D object is a computationally heavy exercise. This makes every calculation slow and power-hungry, a serious limitation when you want to display large 3D images that change in real time. Most hologram calculations require specialized hardware like graphics processing units (GPUs), the energy-guzzling chips that power modern gaming. This severely limits where 3D displays can be deployed.

To address this, a team led by Assistant Professor Takashi Nishitsuji looked at how holograms are calculated. They realized that not all applications need a full rendering of 3D polygons. By focusing solely on drawing the edges of 3D objects, they succeeded in significantly reducing the computational load of hologram calculations. In particular, they could avoid using fast Fourier transforms (FFTs), the intensive math routines needed to compute holograms of filled polygons. The team combined simulation data with real experiments, displaying their holograms on a spatial light modulator (SLM) and illuminating them with laser light to produce a real 3D image. At high resolution, they found that their method could calculate holograms up to 56 times faster, and that the images compared favorably to those made using slower, conventional methods. Importantly, the team used only a standard PC computing core with no standalone graphics processing unit, making the whole process significantly less resource-hungry.
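To illustrate the general idea of edge-only, FFT-free hologram computation, the sketch below sums spherical-wave contributions from points sampled along an object's outline rather than transforming a full polygon field. This is a minimal illustration under assumed parameters (wavelength, pixel pitch, resolution, and the helper functions are all hypothetical), not the team's published algorithm.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the study)
WAVELENGTH = 532e-9          # laser wavelength in metres
PIXEL_PITCH = 8e-6           # SLM pixel pitch in metres
SLM_W, SLM_H = 1024, 1024    # hologram resolution

def edge_points(vertices, samples_per_edge=64):
    """Sample points along the edges (outline) of a 3D polygon,
    instead of filling its whole surface."""
    pts = []
    n = len(vertices)
    for i in range(n):
        a, b = np.asarray(vertices[i]), np.asarray(vertices[(i + 1) % n])
        for t in np.linspace(0.0, 1.0, samples_per_edge, endpoint=False):
            pts.append(a + t * (b - a))
    return np.asarray(pts)

def edge_hologram(points):
    """Superpose spherical waves from edge sample points to obtain a
    phase-only hologram -- a direct summation with no FFT involved."""
    ys, xs = np.indices((SLM_H, SLM_W))
    # physical coordinates of each SLM pixel, centred on the panel
    px = (xs - SLM_W / 2) * PIXEL_PITCH
    py = (ys - SLM_H / 2) * PIXEL_PITCH
    field = np.zeros((SLM_H, SLM_W), dtype=np.complex128)
    k = 2 * np.pi / WAVELENGTH
    for x0, y0, z0 in points:
        r = np.sqrt((px - x0) ** 2 + (py - y0) ** 2 + z0 ** 2)
        field += np.exp(1j * k * r) / r   # spherical wave from one edge point
    return np.angle(field)                # phase pattern sent to the SLM

# A single square outline floating 0.2 m from the SLM (illustrative object)
square = [(-1e-3, -1e-3, 0.2), (1e-3, -1e-3, 0.2),
          (1e-3, 1e-3, 0.2), (-1e-3, 1e-3, 0.2)]
phase = edge_hologram(edge_points(square))
```

Because the cost of this kind of summation scales with the number of edge samples rather than with the full object surface, restricting the calculation to outlines can cut the workload dramatically on an ordinary CPU; the actual speed-ups reported by the team come from their own, more sophisticated method.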

Faster calculations on simpler cores mean lighter, more compact, power-efficient devices that can be used in a wider range of settings. The team have their sights set on heads-up displays (HUDs) on car windshields for navigation, and on augmented reality eyewear to relay instructions during hands-on technical procedures, both exciting prospects for the not-too-distant future.

###

This work was supported by the Kenjiro Takayanagi Foundation, the Inoue Foundation for Science and the Japan Society for the Promotion of Science (19H01097, 19K21536, 20K19810).
