2020
DOI: 10.1109/access.2020.3041612

Stereoscopic Video Quality Assessment Using Oriented Local Gravitational Force Statistics

Abstract: We develop a new no-reference (NR) stereoscopic video quality assessment model that adopts oriented local gravitational force (OLGF) statistics in the space-time domain. The OLGF is a novel extension of an existing local gravitational force descriptor and includes two new components: relative local gravitational force magnitude and relative local gravitational force orientation. The resulting algorithm, called Stereoscopic Video Integrity Predictor using OLGF Statistics (SVIPOS), first uses our previous work t…
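The two OLGF components named in the abstract (relative local gravitational force magnitude and orientation) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a Newtonian-style formulation in which pixel intensities act as masses and each pixel accumulates force contributions from a small neighbourhood; the window radius, the normalisation used for the "relative" magnitude, and the function name are assumptions for illustration only.

```python
import numpy as np

def local_gravitational_force(img, radius=2, eps=1e-8):
    """Toy local gravitational force field: every pixel attracts its
    neighbours with a force proportional to the product of their
    intensities ("masses") and inversely proportional to the squared
    distance. Returns a per-pixel relative magnitude and an orientation
    (radians). Illustrative only -- the exact OLGF formulation in the
    paper is not reproduced here."""
    img = img.astype(np.float64)
    fx = np.zeros_like(img)
    fy = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            r2 = float(dx * dx + dy * dy)
            # neighbour intensity aligned onto the centre pixel's grid
            shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            force = img * shifted / r2            # m_c * m_n / r^2
            fx += force * dx / np.sqrt(r2)        # x component of unit vector
            fy += force * dy / np.sqrt(r2)        # y component of unit vector
    magnitude = np.hypot(fx, fy)
    orientation = np.arctan2(fy, fx)
    # "relative" magnitude: normalise by the mean response so the statistic
    # is less dependent on absolute intensity (assumption, not from the paper)
    rel_magnitude = magnitude / (magnitude + magnitude.mean() + eps)
    return rel_magnitude, orientation
```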

Cited by 3 publications (5 citation statements)
References 50 publications (75 reference statements)
“…We repeated the training-testing process 1000 times and took the median of SROCC, PLCC and RMSE as the performance indicator of the evaluation model. We selected some classic 2D image/video quality assessment models, such as PSNR, SSIM [27], VIS3 [28], and some 3D image/video quality assessment models, such as SINQ [29], SJND [24], BSVQE [4], LBP-TOP [20], SVIPOS [30], as comparison algorithms. For the image quality assessment models, we evaluated the quality of each frame of the stereoscopic video pair and took the average of the quality scores of all frames as the final quality score for the stereoscopic video.…”
Section: Results
Mentioning confidence: 99%
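The frame-wise protocol quoted above (score every frame of both views with a 2D metric, then average) can be sketched as follows. PSNR stands in for any of the 2D models listed; the function names and the frame-sequence interface are hypothetical.

```python
import numpy as np

def compute_psnr(ref_frame, dist_frame, peak=255.0):
    """Plain PSNR between two frames, used here as the 2D IQA stand-in."""
    mse = np.mean((ref_frame.astype(np.float64) - dist_frame.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def stereo_video_score_framewise(ref_left, dist_left, ref_right, dist_right):
    """Average a per-frame 2D metric over all frames of both views.
    Inputs are sequences of frames (e.g. lists of HxW arrays)."""
    scores = []
    for rl, dl, rr, dr in zip(ref_left, dist_left, ref_right, dist_right):
        scores.append(compute_psnr(rl, dl))
        scores.append(compute_psnr(rr, dr))
    return float(np.mean(scores))
```

In the quoted protocol, the resulting per-video scores would then be compared against subjective scores via SROCC, PLCC and RMSE over 1000 random training-testing splits, with the median reported.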
“…Existing SVQA methods often extract features from all frames of the videos [3][4][5][6]. Extracting features on keyframes can reduce redundant information and computational complexity.…”
Section: Introduction
Mentioning confidence: 99%
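The keyframe idea mentioned in the statement above can be illustrated with a simple frame-difference-based selector; the threshold rule below is a generic sketch, not the selection scheme actually used by the citing paper.

```python
import numpy as np

def select_keyframes(frames, threshold=10.0):
    """Pick a frame as a keyframe when its mean absolute difference from
    the last selected keyframe exceeds a threshold. Generic illustration
    of keyframe-based feature extraction, not a specific published scheme."""
    keyframe_indices = [0]                       # always keep the first frame
    last_key = frames[0].astype(np.float64)
    for i, frame in enumerate(frames[1:], start=1):
        f = frame.astype(np.float64)
        if np.mean(np.abs(f - last_key)) > threshold:
            keyframe_indices.append(i)
            last_key = f
    return keyframe_indices
```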
“…Experiments have shown that the method still maintains good performance in the case of other types of distortion [18]. Hou et al. used oriented local gravitational force (OLGF) statistics to extract local gravity responses from monocular maps, product images and frame difference maps, and mapped the gravity response statistics to the quality score of the stereoscopic video using SVR [2].…”
Section: Related Work
Mentioning confidence: 99%
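The final regression step described above, mapping gravity-response statistics to a quality score with support vector regression (SVR), typically looks like the following scikit-learn sketch. The feature dimensionality, kernel, hyper-parameters, and the randomly generated placeholder data are all assumptions; the real feature vectors would come from the upstream OLGF extraction stage.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X: one row of OLGF-style statistics per training video (placeholder values),
# y: corresponding subjective quality scores (e.g. DMOS). Both are stand-ins
# for the output of a feature-extraction stage not shown here.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 24))
y_train = rng.uniform(0, 100, size=100)

# An RBF-kernel SVR is a common choice for quality regression; the specific
# kernel and hyper-parameters here are illustrative only.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_train, y_train)

X_test = rng.normal(size=(5, 24))
predicted_scores = model.predict(X_test)
```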
“…At present, 3D video is mainly professionally generated content (PGC). During 3D video acquisition, professionals control the shooting quality strictly [2]. However, due to the limitations of hardware and technology, video suffers varying degrees of distortion during storage, transmission, display and other stages, which degrades the viewing experience of users.…”
Section: Introduction
Mentioning confidence: 99%
“…Hou et al 85 developed an NR S3D VQA algorithm called Stereoscopic Video Integrity Predictor using OLGF Statistics (SVIPOS) based on a proposed oriented local gravitational force (OLGF) descriptor in the space-time domain, which is an extension of an existing local gravitational force descriptor with two added new components of relative local gravitational force magnitude and orientation. Considering the left and the right views of the video sequences as input parameters, the cyclopean image and product image is generated in the spatial domain to measure the correlation between these two videos, and considering only the left video sequence, a frame difference image is generated in the temporal domain.…”
Section: NR Objective Metrics
Mentioning confidence: 99%
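The spatial and temporal maps mentioned in the statement above, a product image formed from the left and right views and a frame difference image formed from the left view alone, can be sketched as below. The cyclopean image used by SVIPOS involves a more elaborate binocular combination and is not reproduced here; the function names are illustrative.

```python
import numpy as np

def product_image(left_frame, right_frame):
    """Element-wise product of the two views: a simple spatial map that
    reflects the correlation between the left and right videos."""
    return left_frame.astype(np.float64) * right_frame.astype(np.float64)

def frame_difference_image(frame_t, frame_t_minus_1):
    """Absolute difference between consecutive left-view frames: a simple
    temporal map capturing frame-to-frame change."""
    return np.abs(frame_t.astype(np.float64) - frame_t_minus_1.astype(np.float64))
```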