Depth-image-based rendering (DIBR) is one of the core techniques for generating new views in 3D video applications. However, the distortion characteristics of DIBR-synthesized views differ from those of 2D images, so it is necessary to study these unique distortions and to design effective and efficient algorithms that evaluate DIBR-synthesized images and guide DIBR algorithms. In this work, visual saliency and texture naturalness features are extracted to evaluate the quality of DIBR views. After extracting the features, we adopt a machine learning method to map them to quality scores of the DIBR views. Experiments conducted on two synthesized-view databases, IETR and IRCCyN/IVC, show that the proposed algorithm outperforms the compared synthesized-view quality evaluation methods.
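To make the two-stage pipeline concrete, the sketch below pairs a spectral-residual saliency proxy and MSCN-based naturalness statistics with an SVR regressor. These specific features, the regressor choice, and the placeholder data are our own assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch of the abstract's pipeline: extract saliency- and
# naturalness-related features from a synthesized view, then learn a
# mapping from features to subjective quality scores.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter
from sklearn.svm import SVR

def spectral_residual_saliency(gray):
    """Spectral-residual saliency (Hou & Zhang, 2007) as a simple
    visual-saliency proxy; `gray` is a 2-D float array in [0, 1]."""
    f = np.fft.fft2(gray)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its local average.
    residual = log_amp - convolve(log_amp, np.ones((3, 3)) / 9.0)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-12)

def mscn_features(gray):
    """Naturalness proxy: statistics of MSCN coefficients, in the
    spirit of natural-scene-statistics IQA models."""
    mu = gaussian_filter(gray, 7.0 / 6.0)
    sigma = np.sqrt(np.abs(gaussian_filter(gray * gray, 7.0 / 6.0) - mu * mu))
    mscn = (gray - mu) / (sigma + 1.0)
    return np.array([mscn.mean(), mscn.std(), sigma.mean()])

def extract_features(gray):
    sal = spectral_residual_saliency(gray)
    return np.concatenate([[sal.mean(), sal.std()], mscn_features(gray)])

# Training stage: map features of rated views to their quality labels.
rng = np.random.default_rng(0)
train_imgs = [rng.random((64, 64)) for _ in range(20)]  # placeholder images
train_mos = rng.random(20) * 5                          # placeholder MOS labels
X = np.stack([extract_features(im) for im in train_imgs])
model = SVR(kernel="rbf").fit(X, train_mos)
print(model.predict(extract_features(rng.random((64, 64)))[None]))
```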
PM2.5 in the atmosphere causes severe air pollution and dramatically affects the normal production and lives of residents. Real-time monitoring of PM2.5 concentrations is therefore of practical significance for the construction of ecological civilization. Mainstream PM2.5 concentration prediction methods based on electrochemical sensors have disadvantages such as high economic cost, high labor cost, and time delay. To this end, we propose a simple and effective PM2.5 concentration prediction algorithm based on image perception. Specifically, the proposed method develops a natural scene statistics prior to estimate the saturation loss caused by the "haze" formed by PM2.5. After extracting the prior features, we use a feedforward neural network to learn the mapping from the proposed prior features to PM2.5 concentration values. Experiments conducted on the public Air Quality Image Dataset (AQID) show the superiority of the proposed PM2.5 concentration measurement method over state-of-the-art PM2.5 concentration monitoring methods.
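A rough sketch of this idea follows: haze scatters light and washes out color, so low-saturation statistics should correlate with high PM2.5. The saturation features and network size below are illustrative stand-ins, not the published prior or architecture.

```python
# Illustrative sketch of the image-based PM2.5 pipeline: saturation
# statistics as a stand-in for the haze-induced saturation-loss prior,
# mapped to concentration values with a small feedforward network.
import numpy as np
from sklearn.neural_network import MLPRegressor

def saturation_prior(rgb):
    """Saturation-loss features from an RGB image in [0, 1].
    HSV saturation = 1 - min(R,G,B)/max(R,G,B); haze pushes it down."""
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    sat = np.where(mx > 1e-6, 1.0 - mn / np.maximum(mx, 1e-6), 0.0)
    return np.array([sat.mean(), sat.std(),
                     np.quantile(sat, 0.1), np.quantile(sat, 0.9)])

# Train a feedforward regressor from prior features to PM2.5 values.
rng = np.random.default_rng(0)
train_imgs = [rng.random((48, 48, 3)) for _ in range(50)]  # placeholder photos
train_pm25 = rng.random(50) * 200                          # placeholder labels
X = np.stack([saturation_prior(im) for im in train_imgs])
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                   random_state=0).fit(X, train_pm25)
print(net.predict(saturation_prior(rng.random((48, 48, 3)))[None]))
```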
Due to the subjective nature of people's aesthetic experiences of images, personalized image aesthetics assessment (PIAA), which simulates the aesthetic experience of an individual user to score images, has received extensive attention from researchers in the computational intelligence and computer vision communities. Existing PIAA models are usually built on prior knowledge that directly learns the generic aesthetic results of images from most people or the personalized aesthetic results of images from a large number of individuals. However, such learned prior knowledge ignores the mutual influence of the multiple attributes of images and users on personalized aesthetic experiences. To this end, this paper proposes a personalized image aesthetics assessment method via multi-attribute interactive reasoning. Unlike existing PIAA models, the multi-attribute interaction constructed from both images and users serves as more effective prior knowledge. First, we design a generic aesthetics extraction module from the image perspective to obtain the aesthetic score distribution and the multiple objective attributes of an image as rated by most users. Then, we propose a multi-attribute interactive reasoning network from the user perspective: multiple subjective attributes of a user interact with the multiple objective attributes of an image, and the resulting interactive features are fused with the aesthetic score distribution to predict personalized aesthetic scores. Experimental results on multiple PIAA datasets demonstrate that our method outperforms state-of-the-art PIAA methods.
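The sketch below shows one plausible shape for such interactive reasoning: user attributes interact with image attributes via an outer product, and the interaction features are fused with the generic score distribution. The dimensions, the bilinear interaction, and the linear fusion head are assumptions made for illustration, not the paper's exact architecture.

```python
# Schematic forward pass for multi-attribute interactive reasoning:
# subjective user attributes interact with objective image attributes,
# then the interaction is fused with the generic aesthetic score
# distribution to produce a personalized score.
import numpy as np

rng = np.random.default_rng(0)

def personalized_score(img_attr, user_attr, score_dist, w_fuse):
    """img_attr: objective image attributes (e.g. color, composition);
    user_attr: subjective user attributes (e.g. preference traits);
    score_dist: generic aesthetic score distribution over bins 1..10."""
    interaction = np.outer(user_attr, img_attr).ravel()  # attribute interaction
    generic = score_dist @ np.arange(1, 11)              # generic mean score
    fused = np.concatenate([interaction, [generic]])
    return float(fused @ w_fuse)                         # personalized score

img_attr = rng.random(4)       # placeholder objective attributes
user_attr = rng.random(3)      # placeholder subjective attributes
dist = rng.random(10)
dist /= dist.sum()             # placeholder score distribution
w = rng.standard_normal(4 * 3 + 1) * 0.1  # untrained fusion weights
print(personalized_score(img_attr, user_attr, dist, w))
```

In a trained model, the fusion weights would of course be learned from rated images rather than sampled at random; the point here is only the flow of information from the two attribute sets into one personalized prediction.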