News Release

New image sensor will show what the eyes see, and a camera cannot

Software behind the technology already finding its way into photo editing

Peer-Reviewed Publication

U.S. National Science Foundation

ARLINGTON, Va.-- Researchers are developing new technologies that may give robots the visual-sensing edge they need to monitor dimly lit airports, pilot vehicles in extreme weather and direct unmanned combat vehicles.

The researchers intend to create an imaging chip that defeats the harmful effects of arbitrary illumination, allowing robotic vision to leave the controlled lighting of a laboratory and enter the erratic lighting of the natural world. As a first step, the researchers have developed software that simulates the chip circuitry, a program that can, on its own, uncover hidden detail in existing images.

Designed by robot-vision expert Vladimir Brajovic and his colleagues at Intrigue Technologies, Inc., a spin-off of the team's Carnegie Mellon University research, the new optical device will work more like a retina than a standard imaging sensor.

Just as neurons in the eye process information before sending signals to the brain, the pixels of the new device will "talk" to each other about what they see. The pixels will use the information to modify their behavior and adapt to lighting, ultimately gathering visual information even under adverse conditions.

Through an online demonstration, the simulator software plug-in, dubbed Shadow Illuminator, has processed more than 80,000 pictures from around the world. By balancing exposure across images, clearing away "noise" and improving contrast, the software revealed missing textures, exposed concealed individuals and even uncovered obscured features in medical x-ray film.

This new approach counters a persistent problem for computer-vision cameras: when capturing naturally lit scenes, a camera can be as much of an obstacle as a tool. Despite careful attention to shutter speeds and other settings, the brightly illuminated parts of an image are often washed out, while the shadowy parts come out completely black.

The mathematical churning behind that pixel-to-pixel conversation will allow the pixels to "perceive" reflectance, a surface property that determines how much of the incoming light an object bounces back toward the camera.
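
For readers who want to see the idea in concrete terms, the short Python sketch below (using the NumPy and SciPy libraries) illustrates the general principle only; it is not the Shadow Illuminator algorithm or the chip circuitry. It treats each pixel value as the product of reflectance and illumination, estimates a slowly varying illumination field with a simple Gaussian blur (an assumption made purely for illustration) and divides that estimate out to expose reflectance-like detail hidden in shadows.

# Illustrative sketch only -- not the team's actual algorithm or circuitry.
# Model: image = reflectance * illumination. Estimate a smooth illumination
# field, then remove it to approximate reflectance.
import numpy as np
from scipy.ndimage import gaussian_filter

def approximate_reflectance(image, sigma=30.0, eps=1e-6):
    """Recover a reflectance-like image from a grayscale array scaled to [0, 1]."""
    # Work in the log domain so the product becomes a sum:
    # log(image) = log(reflectance) + log(illumination)
    log_image = np.log(image + eps)

    # Assume illumination varies slowly across the scene, so a heavy blur
    # of the log image serves as a crude illumination estimate.
    log_illumination = gaussian_filter(log_image, sigma=sigma)

    # Subtracting the illumination estimate leaves the reflectance term.
    reflectance = np.exp(log_image - log_illumination)

    # Rescale to [0, 1] for display.
    return (reflectance - reflectance.min()) / (reflectance.max() - reflectance.min() + eps)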

Light illuminating an object helps reveal reflectance to a camera or an eye. However, illumination is a necessary evil, says Brajovic.

"Most of the problems in robotic imaging can be traced back to having too much light in some parts of the image and too little in others," he says, "and yet we need light to reveal the objects in a field of view."

To produce images that appear uniformly illuminated, the researchers created a system that widens the range of light intensities a sensor can accommodate.
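
A back-of-the-envelope comparison shows why that widened range matters. The small Python sketch below contrasts the roughly 256:1 intensity range an ordinary 8-bit sensor records with the 100,000:1 or greater range a sunlit scene with deep shadows can present; the scene figure is a common rule-of-thumb assumption, not a number from the release.

# Illustrative arithmetic only; the scene contrast is an assumed textbook value.
import math

def dynamic_range_db(contrast_ratio):
    """Express a light-intensity contrast ratio in decibels."""
    return 20 * math.log10(contrast_ratio)

sensor_ratio = 2 ** 8      # an 8-bit sensor distinguishes about 256 intensity levels
scene_ratio = 100_000      # sunlit scene with deep shadows (assumed rule of thumb)

print(f"8-bit sensor:  {dynamic_range_db(sensor_ratio):.0f} dB")   # about 48 dB
print(f"Outdoor scene: {dynamic_range_db(scene_ratio):.0f} dB")    # about 100 dB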

According to Brajovic, limitations in standard imaging sensors have hindered many vision applications, including security and surveillance, intelligent transportation systems and defense systems, and have ruined a few cherished family photos along the way.

The researchers hope the new technology will yield high-quality image data regardless of natural lighting conditions and ultimately improve the reliability of machine-vision systems such as biometric identification, enhanced X-ray diagnostics and space-exploration imagers.

Additional comments from the researcher:

"The washed out and underexposed images captured by today's digital cameras are simply too confusing for machines to interpret, ultimately leading to failure of their vision systems in many critical applications." – Vladimir Brajovic, Carnegie Mellon University and Intrigue Technologies, Inc.

"Often, when we take a picture with a digital or film camera, we are disappointed that many details we remember seeing appear in the image buried in deep shadows or washed out in overexposed regions. This is because our eyes have a built-in mechanism to adapt to local illumination conditions, while our cameras don't. Because of this camera deficiency, robot vision often fails." – Vladimir Brajovic

###

Media contacts: Josh Chamot, NSF 703-292-7730 jchamot@nsf.gov
Anne Watzman, Carnegie Mellon University 412-268-3830 aw16@andrew.cmu.edu

Program contacts: Murali Nair, NSF 703-292-7059 mnair@nsf.gov
Winslow Sargeant, NSF 703-292-7313 wsargean@nsf.gov
Junku Yuh, NSF 703-292-811 jyuh@nsf.gov

Vladimir Brajovic's image sensor research has been supported by two NSF awards:

SBIR Phase I: Reflectance Sensitive Image Sensor for Illumination-Invariant Visual Perception
http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0339971

ITR: Sensory Level Computation and Information Encoding for Robust Imaging
http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0082364

Notes on images:
Vladimir Brajovic and his collaborators at Intrigue Technologies are developing an image sensor that will approach the adaptive capabilities of the human eye. The chip in this photo is a product of the team's related research at Carnegie Mellon. Like the proposed chip, it is a computational image sensor that pre-processes an image before sending it to a computer, video screen or other outlet.
Credit: Vladimir Brajovic, Carnegie Mellon University and Intrigue Technologies

Road departure warning systems are hampered by conventional cameras. Shadow Illuminator will help the image analysis components of these systems by extracting details from shadows. This is the original, underexposed image of a desert road.
Credit: Timothy E. Nelson

After the new software processed the image, details in the road and surrounding rock became visible.
Credit: Timothy E. Nelson

When applied to x-ray images, Shadow Illuminator enhances contrast and reveals new detail. This is the unprocessed image of a chest x-ray film.
Credit: Nikola Zivaljevic, M.D.

The software reveals additional detail in the x-ray.
Credit: Nikola Zivaljevic, M.D.

The new software may help airport security systems "see" objects in shadows. Here, a combination of dim artificial lights and natural light pouring in from windows creates numerous obstacles for image sensors.
Credit: Vladimir Brajovic

After Shadow Illuminator processing, an area once dominated by shadow now reveals the image of a man in the bottom left corner of the scene.
Credit: Vladimir Brajovic, Carnegie Mellon University and Intrigue Technologies

For more information see:
Additional information on Shadow Illuminator: http://www.intriguetek.com/
Carnegie Mellon press release: http://news.cs.cmu.edu/Releases/demo/132.html
Additional Carnegie Mellon press release: http://www-2.cs.cmu.edu/afs/cs/usr/brajovic/www/labweb/in_the_news.htm

Principal Investigator: Vladimir Brajovic, Carnegie Mellon University, Intrigue Technologies 412-855-8780 brajovic@intriguetek.com

The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering, with an annual budget of nearly $5.47 billion. NSF funds reach all 50 states through grants to nearly 2,000 universities and institutions. Each year, NSF receives about 40,000 competitive requests for funding, and makes about 11,000 new funding awards. The NSF also awards over $200 million in professional and service contracts yearly.

Receive official NSF news electronically through the e-mail delivery and notification system, Custom News Service. To subscribe, enter the NSF Home Page at: http://www.nsf.gov/home/cns/#new and fill in the information under "new users."

Useful NSF Web Sites:
NSF Home Page: http://www.nsf.gov
News Highlights: http://www.nsf.gov/od/lpa
Newsroom: http://www.nsf.gov/od/lpa/news/media/start.htm
Science Statistics: http://www.nsf.gov/sbe/srs/stats.htm
Awards Searches: http://www.fastlane.nsf.gov/a6/A6Start.htm
