Omnidirectional images, also called 360° images, have attracted extensive attention in recent years due to the rapid development of virtual reality (VR) technologies. Throughout omnidirectional image processing, including capture, transmission, and consumption, measuring the perceptual quality of omnidirectional images is highly desired, since it plays an important role in guaranteeing the immersive quality of experience (IQoE). In this paper, we conduct a comprehensive study on the perceptual quality of omnidirectional images from both subjective and objective perspectives. Specifically, we construct the largest subjective omnidirectional image quality database to date, in which we consider several key influential elements from the user's perspective, i.e., realistic non-uniform distortion, viewing conditions, and viewing behavior. In addition to subjective quality scores, we also record head and eye movement data. Furthermore, we make a first attempt to use the proposed database to train a convolutional neural network (CNN) for blind omnidirectional image quality assessment. To be consistent with human viewing behavior in VR devices, we extract viewports from each omnidirectional image and thereby incorporate the user viewing conditions naturally into the proposed model. The proposed model consists of two parts: a multi-scale CNN-based feature extraction module and a perceptual quality prediction module. The feature extraction module aggregates multi-scale features, and the perceptual quality prediction module regresses them to perceived quality scores. Experimental results on our database verify that the proposed model achieves competitive performance compared with state-of-the-art methods.
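To make the two-module design concrete, the sketch below outlines one possible PyTorch realization: viewports sampled from an omnidirectional image are passed through a multi-scale convolutional feature extractor, and a regression head maps the pooled features to a quality score that is averaged over viewports. The backbone depth, channel widths, viewport count, and pooling strategy are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the viewport-based, two-module model described above.
# All layer sizes and the viewport count are placeholder assumptions.
import torch
import torch.nn as nn

class MultiScaleFeatureExtractor(nn.Module):
    """Extracts convolutional features at several scales from one viewport."""
    def __init__(self, in_channels=3, base_channels=32):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True))
        self.block2 = nn.Sequential(
            nn.Conv2d(base_channels, base_channels * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True))
        self.block3 = nn.Sequential(
            nn.Conv2d(base_channels * 2, base_channels * 4, 3, stride=2, padding=1),
            nn.ReLU(inplace=True))
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        # Concatenate globally pooled features from each scale.
        feats = [self.pool(f).flatten(1) for f in (f1, f2, f3)]
        return torch.cat(feats, dim=1)

class ViewportQualityModel(nn.Module):
    """Scores each extracted viewport and averages the per-viewport scores."""
    def __init__(self):
        super().__init__()
        self.extractor = MultiScaleFeatureExtractor()
        self.regressor = nn.Sequential(
            nn.Linear(32 + 64 + 128, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1))

    def forward(self, viewports):
        # viewports: (batch, n_viewports, 3, H, W)
        b, n, c, h, w = viewports.shape
        feats = self.extractor(viewports.reshape(b * n, c, h, w))
        scores = self.regressor(feats).reshape(b, n)
        return scores.mean(dim=1)  # one predicted quality score per image

# Example: 20 viewports of size 224x224 extracted from each omnidirectional image.
model = ViewportQualityModel()
dummy = torch.randn(2, 20, 3, 224, 224)
print(model(dummy).shape)  # torch.Size([2])
```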
Most existing no-reference image quality assessment (NR-IQA) algorithms first extract features and then predict image quality. However, only a small number of these features contribute to the model, while the rest degrade its performance. Consequently, an NR-IQA framework based on feature optimization is proposed to address this problem and applied to super-resolution image quality assessment (SR-IQA). In this study, we design a feature engineering method for this purpose. Specifically, features associated with super-resolution (SR) images are first collected and aggregated. Several advanced feature selection algorithms are then used to rank the feature set by importance, yielding a feature importance matrix. Next, we examine the relationship between the number of retained features and the Pearson linear correlation coefficient (PLCC) to determine the optimal number of features and the optimal feature selection algorithm, and thereby obtain the optimal model. The results show that the image quality scores predicted by the optimal model agree well with human subjective scores. By adopting the proposed feature optimization framework, we can effectively reduce the number of features in the model while obtaining better performance. The experimental results indicate that SR image quality can be accurately predicted using only a small subset of image features. In summary, we propose a feature optimization framework to address the problem of irrelevant features in SR-IQA, and consequently build an SR image quality assessment model.
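The feature-optimization loop can be illustrated with the short sketch below: features are ranked by an importance measure, and the number of retained top-ranked features is swept while tracking PLCC on held-out data to choose the best feature count. The random-forest importance ranking, the SVR regressor, and the synthetic data are placeholder assumptions standing in for the feature sets and selection algorithms studied in the paper.

```python
# Hedged sketch of a rank-then-sweep feature optimization loop.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 60))                          # 60 candidate SR-image features (synthetic)
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=400)   # quality scores (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 1: rank features by importance (one possible "importance matrix").
ranker = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
order = np.argsort(ranker.feature_importances_)[::-1]

# Step 2: sweep the number of top-ranked features and keep the count maximizing PLCC.
best_k, best_plcc = None, -np.inf
for k in range(1, X.shape[1] + 1):
    idx = order[:k]
    model = SVR().fit(X_tr[:, idx], y_tr)
    plcc, _ = pearsonr(model.predict(X_te[:, idx]), y_te)
    if plcc > best_plcc:
        best_k, best_plcc = k, plcc

print(f"optimal number of features: {best_k}, PLCC = {best_plcc:.3f}")
```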