News Release

Disney researchers develop method to capture stylized hair for 3D-printed figurines

Hairstyle is a defining characteristic second only to the face

Peer-Reviewed Publication

Disney Research

Perhaps no aspect of 3D printing has captured the popular imagination more than personalized figurines with the facial features of real people. Now, researchers at Disney Research Zurich and the University of Zaragoza have developed a method that can incorporate an individual's hairstyle as well.

The researchers will present their new method at ACM SIGGRAPH 2014, the International Conference on Computer Graphics and Interactive Techniques in Vancouver, Aug. 10-14.

Miniature statues with a person's likeness are now produced by scanning the individual's face with a depth camera or other sensor to create a 3D model. These facial features can then be applied to a figure that is produced on a 3D printer. But hair is beyond the capabilities of most systems, so hairstyles must either be roughly approximated or replaced with a pre-existing template.

The result can leave much to be desired, said Dr. Derek Bradley, associate research scientist at Disney Research Zurich.

"Almost as much as the face, a person's hairstyle is a defining characteristic of an individual," he explained. "The resulting figurine loses a degree of realism when the individual's hairstyle isn't adequately captured."

The goal is not to reproduce a hairstyle fiber by fiber, as this level of complexity cannot be miniaturized using current 3D printers. Rather, the researchers were inspired by artistic sculptures, such as Michelangelo's David, which reproduce the essence of a hairstyle, but in the solid form of a helmet. In the case of 3D-printed figurines, the researchers sought to retain the appearance of directional wisps and the overall flow of hair, as well as its color.

Beginning with several color images captured of the subject's head, the system first computes a coarse geometry for the surface of the hair. Color information from the images is then added, matching the colors to the rough geometry to the extent possible. In the next step, color stylization, the level of detail is reduced enough to enable the representation to be miniaturized and reproduced, while preserving the hairstyle's defining features. Finally, geometric details are added in a way that is consistent with the color stylization.
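To make that four-stage pipeline more concrete, the sketch below walks through it on toy data. It is only an illustration of the general idea, not Disney Research's implementation: the function names, the use of NumPy, and the simple palette quantization and gradient-based embossing are stand-in assumptions for the paper's actual reconstruction, color stylization, and detail-synthesis algorithms.

```python
# Illustrative sketch only -- not Disney Research's method.
# It mimics the four stages described above on toy NumPy data:
# coarse geometry, color projection, color stylization, and
# geometric detail that follows the stylized colors.
import numpy as np

def coarse_hair_geometry(images):
    """Stand-in for multi-view reconstruction: collapse the views
    into a single height field representing the hair surface."""
    return np.mean(np.stack(images), axis=0).mean(axis=-1)

def project_colors(images, geometry):
    """Stand-in for texturing: average per-pixel colors across views
    (the toy version ignores the geometry it would normally map onto)."""
    return np.mean(np.stack(images), axis=0)

def stylize_colors(colors, levels=4):
    """Reduce color detail by quantizing each channel to a few levels,
    keeping only the dominant wisps of color."""
    step = 1.0 / levels
    return np.round(colors / step) * step

def add_geometric_detail(geometry, stylized_colors, scale=0.05):
    """Emboss the surface where the stylized colors change, so the
    printed relief follows the same wisp boundaries."""
    luminance = stylized_colors.mean(axis=-1)
    gy, gx = np.gradient(luminance)
    return geometry + scale * np.hypot(gx, gy)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    views = [rng.random((64, 64, 3)) for _ in range(4)]  # toy "photos"
    geom = coarse_hair_geometry(views)
    colors = project_colors(views, geom)
    styled = stylize_colors(colors)
    detailed = add_geometric_detail(geom, styled)
    print(detailed.shape, styled.shape)
```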

The researchers demonstrated the system by capturing the varying hairstyles of several people, including two people who were each scanned with four different hairstyles. In each reproduction, the hairstyle remains identifiable and recognizably the same as when the subject was scanned. The method even enabled facial hair and fur to be reproduced.

###

In addition to Bradley, the research team included Dr. Thabo Beeler of Disney Research Zurich, along with Jose I. Echevarria, a Ph.D. student who interned at the Disney lab, and Dr. Diego Gutierrez, both of the University of Zaragoza, Spain.

More information, including a video, is available on the project web site at http://www.disneyresearch.com/project/stylized-hair-capture/.

About Disney Research

Disney Research is a network of research laboratories supporting The Walt Disney Company. Its purpose is to pursue scientific and technological innovation to advance the company's broad media and entertainment efforts. Vice Presidents Jessica Hodgins and Markus Gross manage Disney Research facilities in Los Angeles, Pittsburgh, Zürich, and Boston and work closely with the Pixar and ILM research groups in the San Francisco Bay Area. Research topics include computer graphics, animation, video processing, computer vision, robotics, wireless & mobile computing, human-computer interaction, displays, behavioral economics, and machine learning.

