2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)
DOI: 10.1109/mmsp48831.2020.9287151
No-Reference Video Quality Assessment Using Space-Time Chips

Cited by 18 publications (19 citation statements)
References 20 publications
“…These databases are very challenging, and previous VQA methods, validated on synthetically distorted video datasets, do not produce quality estimates that correlate well with ground-truth Mean Opinion Scores (MOSs). For this reason, methods have been proposed that can capture both spatial and temporal distortions [19, 20], some of which exploit deep learning-based techniques [5, 9, 21].…”
Section: Related Work (mentioning)
confidence: 99%
“…Finally, the AGGD parameters serve as image features and are mapped to a quality score using an SVR. ChipQA [20] captures both spatial and temporal distortions by building a representation of local spatiotemporal data that is attuned to local orientations of motion but is studied over large spatial fields. The quality of a video is estimated by identifying and quantifying deviations from the expected statistics of natural, undistorted space-time chips.…”
Section: Related Work (mentioning)
confidence: 99%
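The pipeline this statement describes (AGGD parameters as quality-aware features, regressed to a score with an SVR) can be sketched as below. This is a minimal illustration assuming the standard BRISQUE-style moment-matching AGGD estimator; the random training data, the three-parameter feature vector, and the SVR hyperparameters are placeholders, not ChipQA's actual configuration.

```python
import numpy as np
from scipy.special import gamma as gamma_fn
from sklearn.svm import SVR

def fit_aggd(coeffs):
    # Moment-matching fit of an asymmetric generalized Gaussian
    # distribution (AGGD) to bandpass coefficients, in the style of
    # BRISQUE-family NR quality models.
    x = np.asarray(coeffs, dtype=np.float64).ravel()
    left, right = x[x < 0], x[x >= 0]
    sigma_l = np.sqrt(np.mean(left ** 2)) if left.size else 1e-6
    sigma_r = np.sqrt(np.mean(right ** 2)) if right.size else 1e-6
    gamma_hat = sigma_l / sigma_r
    r_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    R_hat = r_hat * (gamma_hat ** 3 + 1) * (gamma_hat + 1) / (gamma_hat ** 2 + 1) ** 2
    # Invert the generalized Gaussian ratio function over a grid of shapes.
    alphas = np.arange(0.2, 10.0, 0.001)
    rho = gamma_fn(2 / alphas) ** 2 / (gamma_fn(1 / alphas) * gamma_fn(3 / alphas))
    alpha = alphas[np.argmin((rho - R_hat) ** 2)]
    return alpha, sigma_l, sigma_r

# Hypothetical training set: one 3-parameter AGGD feature vector per video,
# paired with its mean opinion score (MOS). Real models use many more features.
rng = np.random.default_rng(0)
features = np.vstack([fit_aggd(rng.standard_normal((64, 64))) for _ in range(50)])
mos = rng.uniform(1.0, 5.0, size=50)  # placeholder subjective scores
svr = SVR(kernel="rbf", C=1.0).fit(features, mos)
score = svr.predict(features[:1])  # predicted quality for one video
```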
“…For video mean-subtracted contrast-normalized (MSCN) coefficients and spatiotemporal Gabor bandpass-filtered outputs, [40] established an asymmetric generalized Gaussian distribution (AGGD) model to perceive distortions. Meanwhile, optical flow [4, 41], ST-chips [42], multi-scale trajectories [43], and bitstream-level features [44-47] have also been used to quantify distortion in video data. Although many of these methods contribute greatly to the perception of specific distortions without a reference, they are not satisfactory for evaluating the quality of in-the-wild videos with sophisticated distortions.…”
Section: Related Work (mentioning)
confidence: 99%
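The MSCN coefficients mentioned in this statement are the divisively normalized residuals of a frame against Gaussian-weighted local statistics. A minimal sketch follows, assuming the commonly used choices of a Gaussian window with sigma = 7/6 and a stabilizing constant of 1 for 8-bit intensity ranges.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(frame, sigma=7 / 6, c=1.0):
    # Mean-subtracted contrast-normalized (MSCN) coefficients:
    # (I - mu) / (sigma_local + c), where mu and sigma_local are the
    # Gaussian-weighted local mean and standard deviation of the frame.
    frame = np.asarray(frame, dtype=np.float64)
    mu = gaussian_filter(frame, sigma)
    var = gaussian_filter(frame ** 2, sigma) - mu ** 2
    return (frame - mu) / (np.sqrt(np.clip(var, 0.0, None)) + c)

# usage: MSCN map of one grayscale frame
frame = np.random.rand(120, 160) * 255.0
mscn = mscn_coefficients(frame)
```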
“…The parameters of asymmetric generalized Gaussian distributions were considered quality-aware features and mapped onto perceptual quality scores with the help of an SVR. Ebenezer et al. [35] proposed a video representation that contains both spatial and temporal information. Namely, the authors defined space-time chips as quality-aware features: cuts of the original video data along specific directions obtained from local motion flow.…”
Section: Related Work (mentioning)
confidence: 99%
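The idea of cutting video data along local motion directions can be illustrated as follows. This sketch only mimics the concept of a space-time chip: the per-frame displacement (dy, dx) is a placeholder that would normally come from motion estimation, and the sampling geometry is simplified relative to the authors' method.

```python
import numpy as np

def extract_st_chip(volume, y0, x0, dy, dx, size=5):
    # Illustrative sketch only: cut a size x size patch from each of
    # `size` consecutive frames, shifting the window along a local motion
    # vector (dy, dx) per frame so the cut follows the flow direction.
    # `volume` is a (T, H, W) grayscale array; (y0, x0) is the chip center
    # in the first frame. Bounds checking is omitted for brevity.
    half = size // 2
    slices = []
    for t in range(size):
        y = int(round(y0 + t * dy))
        x = int(round(x0 + t * dx))
        slices.append(volume[t, y - half:y + half + 1, x - half:x + half + 1])
    return np.stack(slices)  # (size, size, size) space-time volume

# usage on a synthetic clip; (dy, dx) would normally come from optical flow
clip = np.random.rand(5, 120, 160)
chip = extract_st_chip(clip, y0=60, x0=80, dy=1.0, dx=-0.5)
```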