Abstract: Omnidirectional visual content is a form of graphical and cinematic media that gives viewers the ability to freely change their direction of view. Along with virtual reality, omnidirectional imaging is becoming an increasingly important type of modern media content. This brings new challenges to omnidirectional visual content processing, especially in the fields of compression and quality evaluation. More specifically, the ability to reliably assess the quality of omnidirectional images is a crucial step toward providing a rich immersive quality of experience. In this paper, we introduce a testbed suitable for subjective evaluation of omnidirectional visual content. We also present the results of a pilot experiment that illustrates the applicability of the proposed testbed.
Abstract: Omnidirectional image and video have gained popularity thanks to the availability of capture and display devices for this type of content. Recent studies have assessed the performance of objective metrics in predicting the visual quality of omnidirectional content. These metrics, however, have not been rigorously validated by comparing their predictions against ground-truth subjective scores. In this paper, we present a set of 360-degree images along with their subjective quality ratings. The set is composed of four contents represented in two geometric projections and compressed with three different codecs at four different bitrates. A range of objective quality metrics is then computed for each stimulus and compared to the subjective scores. A statistical analysis is performed to assess how well each objective quality metric predicts subjective visual quality as perceived by human observers. The results estimate the performance of state-of-the-art objective metrics on omnidirectional visual content, showing that objective metrics specifically designed for 360-degree content do not outperform conventional methods designed for 2D images.
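A representative example of a metric designed specifically for 360-degree content is WS-PSNR, which weights each pixel's squared error by the spherical area it covers in the equirectangular projection. The following is a minimal sketch, not the exact implementation evaluated in the paper; the function name and default parameters are illustrative.

```python
import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    """Weighted-to-spherically-uniform PSNR for equirectangular images.

    ref, dist: arrays of shape (H, W) or (H, W, C).
    Each row's squared error is weighted by cos(latitude), which
    compensates for equirectangular oversampling near the poles.
    """
    ref = np.asarray(ref, dtype=float)
    dist = np.asarray(dist, dtype=float)
    h = ref.shape[0]
    # Latitude-dependent weight for each image row.
    w = np.cos((np.arange(h) + 0.5 - h / 2.0) * np.pi / h)
    w = w.reshape((h,) + (1,) * (ref.ndim - 1))  # broadcast over W (and C)
    wmse = np.sum(w * (ref - dist) ** 2) / np.sum(w * np.ones_like(ref))
    if wmse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / wmse)
```

The cosine weighting means that an error near a pole, where the projection stretches few spherical samples over a full image row, lowers the score less than the same error at the equator.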
Automatic prediction of salient regions in images is a well-developed topic in computer vision. Yet virtual reality and omnidirectional visual content bring new challenges to this topic, due to a different representation of visual information and the additional degrees of freedom available to viewers. A model of visual attention is important for continuing research in this direction. In this paper, we develop such a model for head-direction trajectories. The method consists of three basic steps. First, the computed head angular speed is used to exclude the parts of a trajectory where motion is too fast for the viewer's attention to fixate. Second, the fixation locations of different subjects are fused together, optionally preceded by a re-sampling step that enforces an equal distribution of points on the sphere. Finally, Gaussian-based filtering is applied to produce continuous fixation maps. The resulting model can be used to obtain ground-truth experimental data when eye tracking is not available.
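The three steps above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name, grid size, speed threshold, and filter width are all hypothetical, and the optional sphere re-sampling step is skipped.

```python
import numpy as np

def head_fixation_map(directions, timestamps, speed_thresh=20.0,
                      width=64, height=32, sigma=2.0):
    """Build a continuous fixation map from a head-direction trajectory.

    directions: (N, 3) unit vectors of head orientation over time.
    timestamps: (N,) sample times in seconds.
    speed_thresh: angular speed (deg/s) above which samples are discarded.
    Returns an equirectangular (height, width) map normalized to [0, 1].
    """
    d = np.asarray(directions, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    t = np.asarray(timestamps, dtype=float)

    # Step 1: drop samples where head motion is too fast to fixate.
    dots = np.clip(np.sum(d[:-1] * d[1:], axis=1), -1.0, 1.0)
    speed = np.degrees(np.arccos(dots)) / np.diff(t)
    keep = np.concatenate([[True], speed < speed_thresh])
    fix = d[keep]

    # Step 2: fuse fixation locations onto an equirectangular grid.
    lon = np.arctan2(fix[:, 1], fix[:, 0])           # [-pi, pi]
    lat = np.arcsin(np.clip(fix[:, 2], -1.0, 1.0))   # [-pi/2, pi/2]
    col = np.clip(((lon + np.pi) / (2 * np.pi) * width).astype(int), 0, width - 1)
    row = np.clip(((np.pi / 2 - lat) / np.pi * height).astype(int), 0, height - 1)
    fmap = np.zeros((height, width))
    np.add.at(fmap, (row, col), 1.0)

    # Step 3: separable Gaussian filtering; longitude wraps around.
    r = max(1, int(3 * sigma))
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    smooth = lambda v, mode: np.convolve(np.pad(v, r, mode=mode), k, mode="valid")
    fmap = np.apply_along_axis(lambda v: smooth(v, "wrap"), 1, fmap)
    fmap = np.apply_along_axis(lambda v: smooth(v, "reflect"), 0, fmap)
    return fmap / fmap.max() if fmap.max() > 0 else fmap
```

Note that a flat equirectangular Gaussian is only an approximation of spherical smoothing; the horizontal wrap padding at least keeps the map continuous across the longitude seam.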
Lossy image compression is a popular, simple, and effective solution for reducing the amount of data representing digital pictures. In most lossy compression methods, the reduced data volume in bits is achieved at the expense of introducing visual artifacts into the picture. The perceptual impact of such artifacts can be assessed with expensive and time-consuming subjective image quality experiments, or through objective image quality metrics. However, the faster and less resource-demanding objective quality metrics are not always able to reliably predict quality as perceived by human observers. In this paper, the performance of 14 objective image quality metrics is benchmarked against a dataset of compressed images labeled with their subjective quality scores. Moreover, the performance of these objective quality metrics in predicting the subjective quality of images distorted by both conventional and learning-based lossy compression artifacts is assessed, and conclusions are drawn.
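Benchmarks of this kind typically report the Pearson linear correlation (PLCC) and the Spearman rank-order correlation (SROCC) between metric outputs and subjective scores. A minimal pure-NumPy sketch follows; the function names are illustrative, ties in the data are not handled, and published studies usually fit a logistic mapping to the metric values before computing PLCC, which is omitted here.

```python
import numpy as np

def plcc(objective, subjective):
    """Pearson linear correlation between metric values and subjective scores."""
    return np.corrcoef(objective, subjective)[0, 1]

def srocc(objective, subjective):
    """Spearman rank-order correlation (assumes no ties in either array)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(np.asarray(objective)), rank(np.asarray(subjective)))
```

SROCC rewards any monotone relationship between a metric and the subjective scores, while PLCC also penalizes nonlinearity; a metric can therefore rank images perfectly yet show a PLCC below 1.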