News Release

View interpolation networks for reproducing the material appearance of specular objects

Peer-Reviewed Publication

Beijing Zhongke Journal Publishing Co. Ltd.

Image: The two methods used in this study, with their different inputs and outputs.

Credit: Beijing Zhongke Journal Publishing Co. Ltd.

With the spread of Internet shopping, purchasing products online has become common. However, because users are often shown only a few photographs of a product and cannot hold it in their hands, they may be unable to perceive its material.

In this study, we proposed view interpolation networks for reproducing material appearance. We implemented U-Net, an existing image transformation network, and a variant that uses additional information in the middle layer of U-Net. The networks were trained to generate images from intermediate viewpoints between four cameras placed at the corners of a square on a sphere. As a preliminary step toward reproducing complex material appearance, we used relatively simple objects with metallic reflections to verify the effectiveness of the methods.
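
The release does not include code, but the setup described above can be illustrated with a minimal sketch, assuming (hypothetically) that the four 3-channel corner views are stacked into a 12-channel input to a small U-Net that outputs a single intermediate view, trained with the mean squared error mentioned later in this release. The network size, data shapes, and training loop below are illustrative placeholders, not the authors' implementation.

# Minimal sketch (not the authors' code): view interpolation with a small U-Net.
# Assumptions: four 3-channel corner views stacked into a 12-channel input,
# target = the 3-channel view at the interpolated viewpoint, plain MSE training.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Two-level U-Net: encoder, bottleneck, decoder with skip connections."""
    def __init__(self, in_ch=12, out_ch=3):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Toy training step on random tensors standing in for rendered corner/target views.
model = SmallUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
corner_views = torch.rand(2, 12, 128, 128)   # batch of 4 stacked corner views
target_view = torch.rand(2, 3, 128, 128)     # ground-truth intermediate view
prediction = model(corner_views)
loss = nn.functional.mse_loss(prediction, target_view)
loss.backward()
optimizer.step()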

By comparing the methods and training data formats, we determined that single-image output is better for view interpolation than image-array output. We also determined that images from random viewpoints are better than images from fixed viewpoints as the training data format for the network.

Future research should address three major problems.

First, an experiment should be conducted using objects with more complex materials. Only objects with relatively simple materials were used in our experiment. Further experiments using objects with complex optical properties, such as transmission, refraction, and internal scattering, are required to reproduce a more general material appearance.

Second, the degrees of freedom of the viewpoint positions and orientations must be extended. In this study, the cameras were placed on a sphere and always faced its center, a configuration with only two degrees of freedom. If the cameras could also be shifted back and forth or oriented in other directions, view interpolation with more degrees of freedom would be possible.
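
To make the two-degree-of-freedom setup concrete, the sketch below shows one plausible way (not necessarily the authors') to place a camera on a sphere from two angles and build a look-at pose toward the center; adding a radius offset or a free viewing direction would supply the extra degrees of freedom discussed above. The angles, radius, and pose convention are illustrative assumptions.

# Illustrative sketch: a camera on a sphere parameterized by two angles
# (azimuth, elevation), always looking at the sphere's center at the origin.
import numpy as np

def camera_on_sphere(azimuth, elevation, radius=1.0):
    """Return (position, rotation) for a camera on a sphere facing the center."""
    # Two degrees of freedom: azimuth and elevation pick the point on the sphere.
    position = radius * np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    # Look-at construction: the forward axis points from the camera to the center.
    forward = -position / np.linalg.norm(position)
    world_up = np.array([0.0, 0.0, 1.0])
    right = np.cross(forward, world_up)
    if np.linalg.norm(right) < 1e-8:          # camera directly above/below center
        right = np.array([1.0, 0.0, 0.0])
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    rotation = np.stack([right, up, -forward])  # world-to-camera rotation rows
    return position, rotation

# Example: four corner viewpoints of a square on the sphere (as in the study),
# plus an intermediate viewpoint between them in angle space.
corners = [(-0.2, -0.2), (-0.2, 0.2), (0.2, -0.2), (0.2, 0.2)]
poses = [camera_on_sphere(az, el) for az, el in corners]
mid_pose = camera_on_sphere(0.0, 0.0)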

Finally, future research should focus on improving accuracy. The images generated in the experiment looked almost natural; however, the output images failed to reproduce some fine details. This study used a simple mean squared error as the loss function. Accuracy could be improved with a loss function better suited to material appearance reproduction. In addition, the resulting blur may be reduced using GAN-based networks.
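
As one illustration of this last point, the fragment below contrasts the plain mean squared error used in the study with a hypothetical combined loss that adds an adversarial term from a small discriminator, in the spirit of the GAN-based networks mentioned above. The discriminator architecture and weighting are placeholder assumptions, not the authors' design.

# Hypothetical sketch: plain MSE versus an MSE + adversarial loss, as one
# possible way to sharpen fine details (not the authors' method).
import torch
import torch.nn as nn

# A tiny patch discriminator standing in for a real GAN discriminator.
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, padding=1),           # per-patch real/fake scores
)

def generator_loss(prediction, target, adversarial_weight=0.01):
    """Pixel-wise MSE plus an adversarial term rewarding 'real-looking' output."""
    mse = nn.functional.mse_loss(prediction, target)
    fake_scores = discriminator(prediction)
    adversarial = nn.functional.binary_cross_entropy_with_logits(
        fake_scores, torch.ones_like(fake_scores))
    return mse + adversarial_weight * adversarial

# Toy usage with random tensors standing in for generated and ground-truth views.
prediction = torch.rand(1, 3, 128, 128, requires_grad=True)
target = torch.rand(1, 3, 128, 128)
loss = generator_loss(prediction, target)
loss.backward()

In a full GAN setup, the discriminator itself would be trained in alternation with the generator; only the generator-side loss is sketched here.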

