News Release

Researchers teach computers to search for photos based on their contents

Peer-Reviewed Publication

Penn State

Image: ALIPR assigned the following keywords to this photo of a dinosaur exhibit at the American Museum of Natural History in New York, New York: rock, animal, landscape, man-made, people, cave, wildlife, indoor, interior, lizard, texture, design, grass, car and building.

Credit: Penn State

A pair of Penn State researchers has developed a statistical approach, called Automatic Linguistic Indexing of Pictures in Real-Time (ALIPR), that one day could make it easier to search the Internet for photographs. The public can help improve ALIPR's accuracy by visiting the project's Web site, http://www.alipr.com, uploading photographs, and evaluating whether the keywords ALIPR assigns to those photographs are appropriate.

ALIPR works by teaching computers to recognize the contents of photographs, such as buildings, people, or landscapes, rather than by searching for keywords in the surrounding text, as is done with most current image-retrieval systems. The team recently received a patent for an earlier version of the approach, called ALIP, and is in the process of obtaining another patent for the more sophisticated ALIPR. They hope that eventually ALIPR can be used in industry for automatic tagging or as part of Internet search engines.

"Our basic approach is to take a large number of photos -- we started with 60,000 photos -- and to manually tag them with a variety of keywords that describe their contents. For example, we might select 100 photos of national parks and tag them with the following keywords: national park, landscape, and tree," said Jia Li, an associate professor of statistics at Penn State. "We then would build a statistical model to teach the computer to recognize patterns in color and texture among these 100 photos and to assign our keywords to new photos that seem to contain national parks, landscapes, and/or trees. Eventually, we hope to reverse the process so that a person can use the keywords to search the Web for relevant images."

Li said that most current image-retrieval systems search for keywords in the text associated with a photo or in the photo's name. That technique, however, often misses appropriate photos and retrieves inappropriate ones. Li's approach instead trains computers to recognize the semantics of images from pixel information alone.

Li, who developed ALIPR with her colleague James Wang, a Penn State associate professor of information sciences and technology, said that their approach assigns at least one appropriate keyword, out of the seven it suggests for each photo, about 90 percent of the time. But, she added, the accuracy rate really depends on the evaluator. "It depends on how specific the evaluator expects the approach to be," she said. "For example, ALIPR often distinguishes people from animals, but rarely distinguishes children from adults." The team is working to improve ALIPR's accuracy by continually teaching it new keywords.
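The quoted 90-percent figure can be made concrete with a small check on held-out photos: count how often at least one of the keywords a model suggests overlaps with the human-assigned tags. The snippet below, which reuses the hypothetical annotate() function from the earlier sketch, is an illustrative version of that kind of measurement, not the evaluation protocol the team actually used.

def hit_rate(annotate_fn, test_images, test_tags, top_k=7):
    # Fraction of photos for which at least one of the top_k suggested
    # keywords matches a human-assigned tag.
    hits = sum(1 for image, truth in zip(test_images, test_tags)
               if set(annotate_fn(image, top_k=top_k)) & set(truth))
    return hits / len(test_images)

# Example, applying the annotate() sketch above to its own synthetic photos:
print(f"hit rate: {hit_rate(annotate, images, tags):.0%}")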

Although the team's goal is to improve ALIPR's accuracy, Li said she does not believe the approach ever will be 100-percent accurate. "There are so many images out there and so many variations on the images' contents that I don't think it will be possible for ALIPR to be 100-percent accurate," she said. "ALIPR works by recognizing patterns in color and texture. For example, if a cat in a photo is wearing a red coat, the red coat may lead ALIPR to tag the photo with words that are irrelevant to the cat. There is just too much variability out there." Li currently is pursuing some new ideas that may help her to achieve better recognition of image semantics.

###

This research is being supported by the National Science Foundation.

CONTACTS

Jia Li: (+1) 814-863-3074, jiali@psu.edu
Barbara Kennedy (PIO): (+1) 814-863-4682, science@psu.edu

IMAGES

High-resolution images related to this story are on the Web at: http://www.science.psu.edu/alert/Li10-2008.htm

