2022
DOI: 10.1609/aaai.v36i1.19937
Perceptual Quality Assessment of Omnidirectional Images

Abstract: Omnidirectional images, also called 360° images, have attracted extensive attention in recent years due to the rapid development of virtual reality (VR) technologies. Throughout omnidirectional image processing, including capture, transmission, and consumption, measuring the perceptual quality of omnidirectional images is highly desired, since it plays a great role in guaranteeing the immersive quality of experience (IQoE). In this paper, we conduct a comprehensive study on the perceptual quality of omnidire…

Cited by 13 publications (3 citation statements). References 32 publications.
“…Inspired by this observation, we define our training loss function based on the viewports of the omnidirectional image, reflecting how an omnidirectional image is viewed (Sui et al. 2021; Fang et al. 2022). Specifically, we first adopt rectilinear projections (Ye, Alshina, and Boyce 2017) to map the recovered HR image in ERP format back to the 3D sphere, and then sample 14 viewports uniformly distributed over the sphere for each omnidirectional image, which together cover all spherical content.…”
Section: Viewport-based Training Loss
confidence: 99%
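The viewport extraction quoted above (ERP image → sphere → rectilinear viewports) can be sketched as follows. This is a minimal illustrative implementation, not the cited authors' code: the function names, the viewport resolution, and the use of a Fibonacci lattice to place the 14 roughly uniform viewport centers are all assumptions.

```python
import numpy as np

def extract_viewport(erp, lon0, lat0, fov=np.pi / 2, size=64):
    """Rectilinear (gnomonic) projection of one viewport from an ERP image.

    erp: H x W array in equirectangular projection.
    (lon0, lat0): viewport center in radians.
    fov: field of view of the square viewport.
    """
    H, W = erp.shape
    # Viewport image plane at unit distance from the sphere center.
    half = np.tan(fov / 2)
    u, v = np.meshgrid(np.linspace(-half, half, size),
                       np.linspace(-half, half, size))
    # Ray direction for each viewport pixel (camera looks along +x).
    x, y, z = np.ones_like(u), u, -v
    norm = np.sqrt(x**2 + y**2 + z**2)
    x, y, z = x / norm, y / norm, z / norm
    # Rotate rays by pitch (lat0), then offset by yaw (lon0).
    xr = x * np.cos(lat0) - z * np.sin(lat0)
    zr = x * np.sin(lat0) + z * np.cos(lat0)
    lon = np.arctan2(y, xr) + lon0
    lat = np.arcsin(np.clip(zr, -1.0, 1.0))
    # Map sphere coordinates back to ERP pixel indices (nearest neighbor).
    col = ((lon / (2 * np.pi) + 0.5) % 1.0 * W).astype(int) % W
    row = np.clip(((0.5 - lat / np.pi) * H).astype(int), 0, H - 1)
    return erp[row, col]

def sample_viewport_centers(n=14):
    """n roughly uniform directions on the sphere via a Fibonacci lattice."""
    i = np.arange(n)
    lat = np.arcsin(1 - 2 * (i + 0.5) / n)              # in [-pi/2, pi/2]
    lon = (i * np.pi * (3 - np.sqrt(5))) % (2 * np.pi)  # golden-angle spacing
    return list(zip(lon, lat))
```

A viewport-based loss would then compare each of the `n` extracted viewports of the recovered image against the corresponding viewports of the reference, rather than comparing the distorted ERP planes directly.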
“…Recently, some researchers [2], [7], [8], [30]–[42] have turned their attention to saliency studies in 360° panoramic scenarios. However, most datasets provide only eye-fixation ground truth for saliency prediction or bounding-box ground truth for object detection, which can promote salient object detection but is not sufficient for accurate pixel-wise salient object segmentation in panoramic scenarios.…”
Section: 360° Panoramic Datasets
confidence: 99%
“…Existing deep-learned image quality representations [18] mainly rely on the extraction and integration of features in the spatial domain. F_img is usually designed as a stack of multiple DNN blocks.…”
Section: MOS-based Image Quality Representation
confidence: 99%
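To make the "stack of DNN blocks" idea concrete, here is a toy NumPy sketch of such a stacked feature extractor F_img. It is purely illustrative, not the cited model: each block applies a fixed 3×3 box filter (standing in for a learned convolution), a ReLU, and 2×2 average pooling, and the block count and pooling choices are assumptions.

```python
import numpy as np

def dnn_block(x):
    """One illustrative block: 3x3 box filter + ReLU + 2x2 average pooling.

    A real F_img block would use trained convolutional weights; the fixed
    box filter here only demonstrates the stacked spatial-feature design.
    """
    H, W = x.shape
    padded = np.pad(x, 1, mode="edge")
    # 3x3 box filter via nine shifted views of the padded map.
    filt = sum(padded[i:i + H, j:j + W]
               for i in range(3) for j in range(3)) / 9.0
    act = np.maximum(filt, 0.0)  # ReLU
    # 2x2 average pooling (truncating odd edges).
    return act[:H // 2 * 2, :W // 2 * 2].reshape(
        H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def f_img(x, n_blocks=3):
    """Stack several blocks, then globally average-pool to one feature."""
    for _ in range(n_blocks):
        x = dnn_block(x)
    return float(x.mean())
```

In an MOS-based setting, the pooled output of a stack like this would feed a regression head trained against mean opinion scores; this sketch stops at the pooled feature.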