News Release

IAACS: Image aesthetic assessment through color composition and space formation

Peer-Reviewed Publication

Beijing Zhongke Journal Publishing Co. Ltd.

Example outputs from each network layer of HED and the holistic contour map generated in our method.

As depicted in the figure, the article examined the outputs from the different side-output layers of HED: (a) example input image; (b)–(g) outputs from side-output layers 3 to 5 and the final fusion layer (side-output-fuse), respectively; (h) our holistic contour map.

Credit: Beijing Zhongke Journal Publishing Co. Ltd.

The automatic assessment of image aesthetics is challenging: it requires developing assessment methods that reflect the aesthetic judgments of the public. Many factors bear on the aesthetic quality of an image, such as the colors constituting the image, how well those colors go together, the content objects that make up the image, and how those objects are arranged within it. The ability to extract and reason about these factors is the key to assessing image aesthetic quality. With the development of deep learning technologies, researchers have introduced deep convolutional neural networks (CNNs) to perform image aesthetic evaluation[1]. Such methods can assist people in judging image quality, particularly those without rich aesthetic knowledge or photography experience.

Compared with previous models, there is still considerable room for exploring aesthetic feature extraction. Inspired by the three basic disciplines of the Bauhaus educational model, namely plane composition, color composition, and three-dimensional composition, which are also basic disciplines for many visual arts studies, we reason about image aesthetics in terms of color composition and space formation. We propose extracting the color palette as the color composition features, and extracting image contour maps based on edge detection to formulate the space formation features. We extracted these features using the proposed feature extraction module (FET). We also extracted image content features from the input image using an ImageNet-based classification network. By fusing the extracted features, we generated a ten-category score distribution to represent the image aesthetics. The main contributions of this study are as follows.
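The fusion step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): three feature vectors stand in for the color composition, space formation, and content views, and a single linear head maps their concatenation to a ten-category score distribution. All dimensions and weights are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Normalize logits into a probability distribution."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical feature vectors from the three views (dimensions are illustrative).
color_feat = rng.standard_normal(64)     # color composition (palette-based)
space_feat = rng.standard_normal(64)     # space formation (contour-based)
content_feat = rng.standard_normal(128)  # content features (ImageNet backbone)

# Simple late fusion: concatenate the views, then a linear head to 10 score bins.
fused = np.concatenate([color_feat, space_feat, content_feat])
W = rng.standard_normal((10, fused.size)) * 0.05  # placeholder, untrained weights
dist = softmax(W @ fused)  # ten-category aesthetic score distribution

# A scalar aesthetic score can be read off as the expectation over bins 1..10.
mean_score = float(np.dot(dist, np.arange(1, 11)))
```

In a trained model the head weights would of course be learned jointly with the feature extractors; the sketch only shows how the three views combine into one distribution.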

• We are the first to use color composition, space formation, and visual features of image contents to assess image aesthetics through deep learning.

• We propose the extraction of color composition features based on the color palette and the contribution of each color component. We also propose the extraction of space formation features by analyzing the contours of the image content.

• We develop a novel multi-view learning model to extract and merge features and effectively evaluate image aesthetics.
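To make the first of these contributions concrete, a color palette with per-color contributions can be approximated by clustering pixel colors. The toy sketch below (an assumption for illustration, not the paper's FET module) uses a small k-means loop over RGB pixels and reports each palette color's share of the image.

```python
import numpy as np

def extract_palette(pixels, k=5, iters=10, seed=0):
    """Toy k-means over RGB pixels; returns (palette colors, contribution weights)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest palette color.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each palette color to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    weights = np.bincount(labels, minlength=k) / len(pixels)
    return centers, weights

# Synthetic "image": two dominant color regions plus noise.
rng = np.random.default_rng(1)
red = rng.normal([200, 40, 40], 10, size=(300, 3))
blue = rng.normal([40, 40, 200], 10, size=(700, 3))
pixels = np.clip(np.vstack([red, blue]), 0, 255)

palette, weights = extract_palette(pixels, k=2)
```

The contribution weights sum to one, so they can serve directly as the "contribution of each color component" feature alongside the palette colors themselves.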


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.