Contact: Kristen Grauman
University of Texas at Austin
Caption: The researchers used machine learning techniques to teach their system to "score" the significance of objects in view based on egocentric factors, such as how often an object appeared in the center of the frame (a good proxy for where the camera wearer's gaze is) or whether it was touched by the wearer's hands.
Credit: Courtesy of Kristen Grauman.
Usage Restrictions: None
Related news release: Researchers use machine learning to boil down the stories that wearable cameras are telling