2022
DOI: 10.1016/j.neucom.2022.01.002

3D saliency guided deep quality predictor for no-reference stereoscopic images

Cited by 9 publications (4 citation statements)
References 45 publications
“…It is worth noting that the proposed metric scheme takes the input RGB (Red, Green, Blue) stereo image without any pre-processing and provides four output scores (i.e., left, right, stereo and global score), whereas most SIQA metrics convert the input images to a typical gray tone as a pre-treatment step for the CNN model and give a single score. As demonstrated in our recent work [17], using three channels as input rather than a single gray scale tends to increase performance by preserving the original distortion effects as perceived by human observers during the evaluation process.…”
Section: Proposed Multi-score Methods
confidence: 81%
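The multi-score design quoted above (an RGB stereo pair in, four scores out) can be sketched minimally. The global-average pooling and the linear head below are illustrative stand-ins, not the paper's actual CNN, and all weights are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regression head mapping pooled features to four scores
W = rng.standard_normal((4, 6)) * 0.1
b = np.zeros(4)

def toy_multiscore_predictor(left_rgb, right_rgb):
    """Toy stand-in for the multi-score scheme: RGB left/right views
    (H, W, 3) in, four quality scores out (left, right, stereo, global).
    The RGB channels are kept as-is, with no grayscale conversion."""
    feat_l = left_rgb.mean(axis=(0, 1))   # pool each view to a (3,) feature
    feat_r = right_rgb.mean(axis=(0, 1))
    feat = np.concatenate([feat_l, feat_r])  # (6,) joint feature
    return W @ feat + b                      # (4,) scores

left = rng.random((64, 64, 3))
right = rng.random((64, 64, 3))
scores = toy_multiscore_predictor(left, right)
print(scores.shape)  # (4,)
```

The point of the sketch is only the interface: three input channels per view and four output scores, versus the single grayscale input and single score of most SIQA metrics.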
“…The diminution is similar for both datasets.

Metrics              Waterloo-P1/Waterloo-P2  Waterloo-P2/Waterloo-P1
Liu [12]             0.696                    0.701
Yang [14]            0.781                    0.864
Chen [22]            0.806                    0.846
Saliency-SIQA [17]   0.826                    0.848
DECOSINE [13]        0.842                    0.873
Wang [15]            0.856                    0.881
Proposed             0.944                    0.940

Compared to the second best metric (i.e., Wang), the improvement in quality-assessment accuracy was found to be about 10% in terms of PLCC.…”
Section: Comparison With the State-of-the-art
confidence: 99%
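The figures compared above are PLCC values, i.e., Pearson linear correlation coefficients between predicted scores and subjective MOS values. A minimal sketch of the computation, using made-up numbers rather than the paper's data:

```python
import numpy as np

def plcc(pred, mos):
    """Pearson linear correlation coefficient between predicted
    quality scores and subjective MOS values."""
    pred = np.asarray(pred, dtype=float)
    mos = np.asarray(mos, dtype=float)
    pc = pred - pred.mean()   # center both series
    mc = mos - mos.mean()
    return float((pc @ mc) / (np.linalg.norm(pc) * np.linalg.norm(mc)))

# Illustrative values only, not the paper's data
print(round(plcc([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]), 3))  # 0.991
```

A PLCC of 1.0 would mean the predictions track the subjective scores perfectly up to a linear mapping, which is why values such as 0.944 indicate strong agreement.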
“…Jiang et al [42] proposed a unified quality evaluation model for singly and multiply distorted stereoscopic images by learning visual primitives based on a supervised dictionary framework to encode quality-related features. Messai et al [43], [44], [45] created cyclopean images in a first stage, then predicted scores with machine learning or a convolutional neural network (CNN). Oh et al [46] built a deep CNN for blind SIQA trained through two-step regression, where the first step automatically extracts local features and the second aggregates them into global features.…”
Section: Related Work
confidence: 99%
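The cyclopean-image stage mentioned above fuses the left and right views into a single intermediate image before quality prediction. Real methods weight the views by local energy or saliency and use disparity-shifted right views; the fixed-weight blend below is a deliberately simplified assumption just to show the fusion step:

```python
import numpy as np

def toy_cyclopean(left, right, w_left=0.5):
    """Heavily simplified sketch of cyclopean-image fusion: blend the
    two views with a fixed per-view weight. Actual cyclopean models
    compute spatially varying weights and disparity compensation."""
    return w_left * left + (1.0 - w_left) * right

left = np.full((4, 4), 0.2)
right = np.full((4, 4), 0.8)
cyc = toy_cyclopean(left, right)
print(cyc[0, 0])  # 0.5
```

The fused image then plays the role of a single "view" that a standard 2D quality model or CNN can score, which is the two-stage pipeline the quoted passage attributes to Messai et al.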