2021
DOI: 10.2352/issn.2694-118x.2021.lim-21
Visual Scan-Path based Data-Augmentation for CNN-based 360-degree Image Quality Assessment

Abstract: 360-degree image quality assessment (IQA) faces a major challenge: the lack of ground-truth databases. This problem is accentuated for deep-learning-based approaches, whose performance is only as good as the available data. In this context, only two databases are used to train and validate deep-learning-based IQA models. To compensate for this lack, a data-augmentation technique is investigated in this paper. We use visual scan-paths to increase the number of learning examples drawn from existing training data. Multiple scan-p…
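The abstract's core idea — turning one labelled 360-degree image into several training samples, one per visual scan-path — can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the patch size, the (row, col) fixation format, and the plain equirectangular crop are all assumptions for clarity.

```python
import numpy as np

def augment_from_scanpaths(image, score, scanpaths, patch=64):
    """Turn one labelled 360-degree image into several training samples,
    one per visual scan-path (each a list of (row, col) fixations).
    Every sample inherits the image's quality score."""
    h, w = image.shape[:2]
    samples = []
    for path in scanpaths:
        patches = []
        for (r, c) in path:
            # Clamp so the patch stays inside the equirectangular frame.
            r0 = int(np.clip(r - patch // 2, 0, h - patch))
            c0 = int(np.clip(c - patch // 2, 0, w - patch))
            patches.append(image[r0:r0 + patch, c0:c0 + patch])
        # Each scan-path yields one (patch-sequence, score) example.
        samples.append((np.stack(patches), score))
    return samples
```

With N scan-paths per image, the effective number of labelled training examples is multiplied by N without collecting new subjective scores.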

Cited by 1 publication (2 citation statements); references 28 publications.
“…Based on these observations, the parameter for both methods should be carefully chosen, as it depends on the variability and span of the local qualities. In addition, the difference between OIQA and CVIQ is due to the nature and diversity of their content, as shown in [30]. This is also visible in the provided curves, where a notable gap between PLCC and SRCC values can be observed on CVIQ compared to OIQA, independently of the pooling method used.…”
Section: Results
confidence: 80%
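The PLCC/SRCC gap discussed above contrasts two standard IQA agreement measures: Pearson's linear correlation between predicted and subjective scores, and Spearman's rank correlation. A minimal numpy-only sketch (tie handling in the ranking is omitted for brevity; the scores below are hypothetical):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation between predictions and MOS."""
    return float(np.corrcoef(x, y)[0, 1])

def srcc(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (ties are ignored in this simplified ranking)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(x), rank(y))

mos  = [3.1, 4.0, 2.2, 4.5, 3.8]   # hypothetical ground-truth scores
pred = [3.0, 4.2, 2.5, 4.4, 3.5]   # hypothetical model predictions
```

A model can rank images perfectly (SRCC = 1) while its scores are not linearly aligned with the MOS scale (lower PLCC), which is why the two values can diverge on a given database.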
“…By taking the content surrounding these fixations, we extract patches of 256 × 256 pixels. These patches are extracted directly on the sphere in order to avoid the geometric distortions caused by the sphere-to-plane projection [30, 11]. In total, eighty patches are extracted from the 360-degree image I.…”
Section: Patch-based Model
confidence: 99%
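Extracting a patch "on the sphere", as the citing paper describes, can be sketched with an inverse gnomonic projection: a tangent plane is placed at the fixation point, a regular grid on that plane is mapped back to latitude/longitude, and the equirectangular image is sampled there. The field of view and the nearest-neighbour sampling below are illustrative assumptions, not the paper's stated settings.

```python
import numpy as np

def sphere_patch(equirect, lat0, lon0, size=256, fov=np.pi / 6):
    """Sample a size x size patch tangent to the sphere at fixation
    (lat0, lon0) via an inverse gnomonic projection, avoiding the
    stretching a direct equirectangular crop would introduce."""
    h, w = equirect.shape[:2]
    half = np.tan(fov / 2)
    x, y = np.meshgrid(np.linspace(-half, half, size),
                       np.linspace(-half, half, size))
    rho = np.hypot(x, y)
    c = np.arctan(rho)
    cos_c, sin_c = np.cos(c), np.sin(c)
    rho = np.where(rho == 0, 1e-12, rho)  # avoid 0/0 at the patch centre
    lat = np.arcsin(np.clip(cos_c * np.sin(lat0) +
                            y * sin_c * np.cos(lat0) / rho, -1.0, 1.0))
    lon = lon0 + np.arctan2(x * sin_c,
                            rho * np.cos(lat0) * cos_c
                            - y * sin_c * np.sin(lat0))
    # Map (lat, lon) back to equirectangular pixels, nearest neighbour,
    # wrapping longitude around the 360-degree seam.
    rows = np.clip(((0.5 - lat / np.pi) * h).astype(int), 0, h - 1)
    cols = ((lon / (2 * np.pi) + 0.5) % 1.0 * w).astype(int) % w
    return equirect[rows, cols]
```

Repeating this at each of the eighty fixations yields distortion-free patches regardless of where on the sphere (including near the poles) a fixation lands.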