News Release

Compact hyperspectral imaging at low cost

Novel method enables compact, single-shot hyperspectral imaging using just a prism

Peer-Reviewed Publication

Association for Computing Machinery

Image: A close-up inset captured by the researchers and used as an input image for their algorithm.

Credit: ACM SIGGRAPH Asia

BANGKOK, Thailand -- With hyperspectral imaging, photographers can obtain finely detailed images, capturing the spectrum at each pixel of a scene. The technology has wide reach and is being applied in fields such as military applications, astronomy, agriculture, biomedical imaging and geoscience. Scientists, for instance, rely on hyperspectral imaging to observe and analyze materials for mining and geology, or for various applications in the medical field. However, hyperspectral imaging systems are expensive -- ranging from $25,000 to $100,000 -- and require complex specialized hardware to operate.

A team of computer scientists from KAIST, South Korea, and Universidad de Zaragoza, Spain, has devised a method for accurate, low-cost hyperspectral imaging that does away with expensive equipment and complex coded apertures. This novel, compact single-shot hyperspectral imaging method captures images using a conventional DSLR camera equipped with just an ordinary refractive prism placed in front of the lens. The new, user-friendly method was tested on a variety of natural scenes, and the results, according to the researchers, compare well with current state-of-the-art hyperspectral imaging systems, achieving quality images without compromising accuracy.

The team will present their new method at SIGGRAPH Asia 2017 in Bangkok, 27-30 November. The annual conference and exhibition brings together the world's leading professionals, academics and creative minds at the forefront of computer graphics and interactive techniques.

"These hyperspectral imaging systems are generally built for specific purposes such as aerial remote sensing, or military applications, and as such they are not affordable nor practical for ordinary users," said Min H. Kim, associate professor of computer science at KAIST and a lead author of the study. "Our system requires no advanced skills, and we are able to obtain hyperspectral images at virtually full resolution while making hyperspectral imaging practical."

Kim's collaborators include Diego Gutierrez, associate professor at Universidad de Zaragoza; Seung-Hwan Baek, computer science PhD student at KAIST; and Incheol Kim, researcher at KAIST in Min H. Kim's lab.

A hyperspectral image can be described as a three-dimensional data cube: two spatial dimensions plus one spectral dimension. The imaging technique involves capturing the spectrum for each pixel in an image; as a result, the digital images provide detailed characterizations of the scene or object.
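To make the data-cube idea concrete, here is a minimal sketch in Python with NumPy; the image size, band count and wavelength range below are illustrative assumptions, not values from the study.

    import numpy as np

    # A hyperspectral data cube: two spatial axes (height, width) plus one
    # spectral axis. All sizes here are made up for illustration.
    height, width, num_bands = 480, 640, 31
    wavelengths = np.linspace(400, 700, num_bands)  # nm, across the visible range

    cube = np.zeros((height, width, num_bands), dtype=np.float32)

    # The spectrum measured at one pixel is a 1-D slice along the spectral axis...
    spectrum_at_pixel = cube[240, 320, :]

    # ...while one spectral band is an ordinary 2-D grayscale image.
    band_near_550nm = cube[:, :, np.argmin(np.abs(wavelengths - 550))]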

Since the researchers' new setup operates without the typical coded aperture mask and a professional setup with large optical components, the available spectral cues are limited. To this end, the researchers developed an image formation model that predicts the perspective projection of dispersion (the splitting of light into a spectrum), yielding the dispersion direction and magnitude of each wavelength at every pixel. Their technique also includes a novel calibration method to estimate the spatially varying dispersion of the prism, enabling users to capture spectral information without a large system setup built from many optical components.
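The release does not spell out the model itself. Purely as a hedged illustration of the idea, the toy function below maps every pixel and wavelength to a 2-D displacement vector relative to a reference wavelength; the linear falloff and the spatial gain are invented placeholders, not the authors' calibrated model.

    import numpy as np

    def dispersion_map(height, width, wavelengths, ref_wavelength=550.0):
        # Toy stand-in for a per-pixel dispersion model (hypothetical): for
        # every pixel and wavelength, return a 2-D displacement vector
        # relative to a reference wavelength.
        xs = np.tile(np.arange(width, dtype=np.float32), (height, 1))
        # Invented assumption: dispersion grows linearly away from the
        # reference wavelength and strengthens slightly across the image
        # plane, mimicking the spatially varying dispersion that the
        # calibration step must account for.
        spatial_gain = 1.0 + 0.1 * (xs / width)
        shift_px = 0.02 * (np.asarray(wavelengths) - ref_wavelength)  # px per nm, made up
        disp = np.zeros((height, width, len(wavelengths), 2), dtype=np.float32)
        disp[..., 0] = spatial_gain[..., None] * shift_px  # assume mostly horizontal dispersion
        return disp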

Lastly, their reconstruction algorithm estimates the full spectral information of a scene from this sparse information, addressing edge restoration, gradient estimation and the spectral resolution of the image.
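The release gives no optimization details. As a generic sketch of recovering dense spectra from sparse measurements (not the paper's actual algorithm), the snippet below solves a Tikhonov-regularized least-squares problem in which a spectral-gradient penalty favors smooth spectra; the operator A is an abstract placeholder for whatever encodes the calibrated dispersion.

    import numpy as np

    def reconstruct_spectrum(A, b, lam=0.1):
        # Generic sparse-to-dense sketch (not the authors' method): recover a
        # dense spectrum x from sparse measurements b ~= A @ x by minimizing
        # ||A x - b||^2 + lam * ||D x||^2, where D is a first-difference
        # operator that penalizes spectral gradients (a smoothness prior).
        n = A.shape[1]
        D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
        M = np.vstack([A, np.sqrt(lam) * D])
        rhs = np.concatenate([b, np.zeros(n - 1)])
        x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
        return x

    # Example: 5 sparse samples of a 31-band spectrum (values made up).
    A = np.zeros((5, 31))
    A[np.arange(5), [0, 7, 15, 23, 30]] = 1.0
    b = np.array([0.2, 0.5, 0.9, 0.6, 0.3])
    dense = reconstruct_spectrum(A, b)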

In the study, the researchers compare the predictions of their dispersion model with those of professional optics simulation software. They place a prism in front of a digital camera's 50 mm lens and capture a point at a distance of 700 mm. Their method accurately predicts the dispersion at every pixel, producing results comparable to a professional physical simulation of light transport.
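For intuition about why each wavelength lands at a slightly different pixel, here is a back-of-the-envelope prism calculation using Snell's law and a Cauchy dispersion model. The glass coefficients and angles are generic assumptions; this is not the professional optics simulation referenced above.

    import numpy as np

    def prism_deviation(wavelength_nm, apex_deg=30.0, incidence_deg=45.0):
        # Deviation angle of a prism via Snell's law, with a Cauchy model
        # n(lambda) = A + B / lambda^2 for a generic crown glass
        # (coefficients assumed, not measured values from the study).
        A_c, B_c = 1.5046, 4200.0                 # Cauchy coefficients (B in nm^2)
        n = A_c + B_c / wavelength_nm**2          # wavelength-dependent index
        apex = np.radians(apex_deg)
        i1 = np.radians(incidence_deg)
        r1 = np.arcsin(np.sin(i1) / n)            # refraction at the first face
        r2 = apex - r1                            # internal angle at the second face
        i2 = np.arcsin(n * np.sin(r2))            # exit angle at the second face
        return np.degrees(i1 + i2 - apex)         # total angular deviation

    # Shorter (bluer) wavelengths refract more strongly, so they deviate further:
    for wl in (450, 550, 650):
        print(wl, "nm ->", round(prism_deviation(wl), 3), "deg")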

In future work, the team plans to address the system's current sensitivity to noise, as well as performance limitations caused by poor lighting and by scene or object surfaces that lack edges.

###

About SIGGRAPH Asia 2017

The 10th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia will take place in Bangkok, Thailand at the Bangkok International Trade and Exhibition Centre (BITEC) from 27 - 30 November 2017. The annual event held in Asia attracts the most respected technical and creative people from all over the world who are excited by research, science, art, animation, gaming, interactivity, education and emerging technologies. The four-day SIGGRAPH Asia 2017 conference includes a diverse range of juried programs, such as the Art Gallery, Computer Animation Festival, Courses, Emerging Technologies, Posters, Symposium on Education, Symposium on Mobile Graphics and Interactive Applications, Symposium on Visualization, Technical Briefs, Technical Papers, VR Showcase and Workshops. A three-day exhibition held from 28 - 30 November 2017 will offer a business platform for industry players to market their innovative products and services to computer graphics and interactive techniques professionals and enthusiasts from Asia and beyond.

About ACM SIGGRAPH

The Association for Computing Machinery's Special Interest Group on Computer Graphics and Interactive Techniques (ACM SIGGRAPH) sponsors SIGGRAPH Asia 2017. Founded in 1947, ACM is an educational and scientific society uniting the world's computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field's challenges. ACM strengthens the profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM SIGGRAPH deals with all aspects of graphical user/computer communication and manipulation: hardware, languages, data structures, methodology, and applications.

