News Release

Disney researchers improve automated recognition of human body movements in videos

New method's potential applications include video search and retrieval

Peer-Reviewed Publication

Disney Research

An algorithm developed through a collaboration between Disney Research Pittsburgh and Boston University can improve the automated recognition of actions in video, a capability that could aid video search and retrieval, video analysis, and human-computer interaction research.

The core idea behind the new method is to express each action, whether a pedestrian strolling down the street or a gymnast performing a somersault, as a collection of space-time patterns. These begin with elementary body movements, such as a leg moving up or an arm flexing. The movements then combine into spatial and temporal patterns: a leg moves up and then an arm flexes, or a leg moves up below an arm. A pattern can also describe a hierarchical relationship, such as a leg moving as part of the more global movement of the body.
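For readers who want a concrete picture, such a space-time tree might be sketched roughly as follows in Python; the class names, fields, and relation labels here are illustrative assumptions, not the researchers' actual data structures:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class MovementNode:
        # One elementary body movement, localized in space and time.
        label: str                      # e.g. "leg-up", "arm-flex" (illustrative)
        frames: Tuple[int, int]         # (start_frame, end_frame)
        box: Tuple[int, int, int, int]  # (x, y, width, height) image region
        edges: List["TreeEdge"] = field(default_factory=list)

    @dataclass
    class TreeEdge:
        # How a child movement relates to its parent: "temporal" (one
        # follows the other), "spatial" (e.g. below/above), or
        # "hierarchical" (part of a larger movement).
        relation: str
        child: "MovementNode"

    # "A leg moves up and then the arm flexes," with the leg movement
    # itself part of the global body motion.
    body = MovementNode("body-motion", (0, 60), (50, 20, 200, 400))
    leg_up = MovementNode("leg-up", (0, 25), (120, 300, 60, 100))
    arm_flex = MovementNode("arm-flex", (25, 50), (80, 80, 50, 70))
    body.edges.append(TreeEdge("hierarchical", leg_up))
    leg_up.edges.append(TreeEdge("temporal", arm_flex))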

"Modeling actions as collections of such patterns, instead of a single pattern, allows us to account for variations in action execution," said Leonid Sigal, senior research scientist at Disney Research Pittsburgh. It thus is able to recognize a tuck or a pike movement by a diver, even if one diver's execution differs from another, or if the angle of the dive blocks a part of one diver's body, but not that of another diver's.

The method, developed by Sigal along with Shugao Ma, a Disney Research intern and Ph.D. student at Boston University, and Ma's adviser, Prof. Stan Sclaroff, can detect these structures directly from a video without fine-grained supervision.

"Knowing that a video contains 'walking' or 'diving,' we automatically discover which elements are most important and discriminative and with which relationships they need to be stringed together into patterns to improve recognition," Ma said.

Representing actions in this way, as space-time trees, not only improves recognition of actions but also is concise and efficient, said the researchers, who will present their findings at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), June 7-12 in Boston.
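As an illustration of how such an ensemble of discovered trees might be used at recognition time, the sketch below reduces each tree to the set of (movement, relation, movement) triples it contains and pools pattern responses per action; the paper's actual matching procedure is more involved, and every name here is hypothetical:

    def match_score(pattern_triples, video_triples):
        # Fraction of a pattern's triples found among the video's detections.
        hits = sum(1 for t in pattern_triples if t in video_triples)
        return hits / len(pattern_triples)

    def classify(video_triples, patterns_by_action):
        # Each action is modeled as a collection of tree patterns; a video
        # is scored by pooling the responses of all patterns per action.
        scores = {
            action: sum(match_score(p, video_triples) for p in patterns)
            for action, patterns in patterns_by_action.items()
        }
        return max(scores, key=scores.get)

    # Illustrative use with made-up patterns and detections:
    patterns = {
        "diving": [{("leg-up", "then", "tuck")},
                   {("tuck", "part-of", "body-rotation")}],
        "walking": [{("leg-up", "then", "leg-down")}],
    }
    video = {("leg-up", "then", "tuck"), ("arm-raise", "then", "leg-up")}
    print(classify(video, patterns))  # -> diving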

The researchers tested the method on standard video datasets that include 10 different sports actions and four interactions, such as kisses and hugs, depicted in TV programs. On these datasets, the Disney algorithm outperformed previously reported methods, identifying elements with a richer set of relations among them. The researchers also found that space-time trees discovered in one dataset to distinguish hugs from kisses performed promisingly at discriminating those interactions from actions, such as eating and using a phone, that were not part of the original dataset.

###

This work was supported in part by the National Science Foundation.

For more information, visit the project web site at http://www.disneyresearch.com/publication/space-time-tree-ensemble-for-action-recognition.

About Disney Research

Disney Research is a network of research laboratories supporting The Walt Disney Company. Its purpose is to pursue scientific and technological innovation to advance the company's broad media and entertainment efforts. Vice Presidents Jessica Hodgins and Markus Gross manage Disney Research facilities in Los Angeles, Pittsburgh, Zürich, and Boston and work closely with the Pixar and ILM research groups in the San Francisco Bay Area. Research topics include computer graphics, animation, video processing, computer vision, robotics, wireless & mobile computing, human-computer interaction, displays, behavioral economics, and machine learning.
