2021
DOI: 10.3390/jimaging7030055
An Efficient Method for No-Reference Video Quality Assessment

Abstract: Methods for No-Reference Video Quality Assessment (NR-VQA) of consumer-produced video content are largely investigated due to the spread of databases containing videos affected by natural distortions. In this work, we design an effective and efficient method for NR-VQA. The proposed method exploits a novel sampling module capable of selecting a predetermined number of frames from the whole video sequence on which to base the quality assessment. It encodes both the quality attributes and semantic content of vid…
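The abstract describes a sampling module that picks a fixed number of frames from the full sequence to score. The paper's actual selection criterion is not given here, so the following is only a minimal sketch of the general idea, using simple uniform (segment-midpoint) sampling; the function name and strategy are assumptions, not the authors' method.

```python
def sample_frame_indices(num_frames: int, num_samples: int) -> list[int]:
    """Pick `num_samples` frame indices spread evenly across a clip of
    `num_frames` frames (hypothetical stand-in for the paper's sampler)."""
    if num_samples >= num_frames:
        # Clip is already short enough: keep every frame.
        return list(range(num_frames))
    step = num_frames / num_samples
    # Take the midpoint of each of the `num_samples` equal-length segments,
    # so the chosen frames cover the whole sequence without clustering.
    return [int(step * i + step / 2) for i in range(num_samples)]
```

A quality model would then run only on the returned indices (e.g. 16 frames out of several hundred), which is where the efficiency claim in the title comes from.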

Cited by 12 publications (17 citation statements). References 40 publications.
“…Obviously, the above mentioned algorithms were evaluated exactly the same way as the proposed method which is described in Section 4.2 . Further, the performance results of eleven other methods, such as FC Model [ 45 ], STFC Model [ 45 ], STS-SVR [ 88 ], STS-MLP [ 88 ], ChipQA [ 89 ], QSA-VQM [ 50 ], Agarla et al [ 51 ], Jiang et al [ 90 ], MLSP-VQA-FF [ 6 ], MLSP-VQA-RN [ 6 ], and MLSP-VQA-HYB [ 6 ], were copied from the corresponding papers to give a more comprehensive comparison to the state-of-the-art. The results are summarized in Table 9 and Table 10 .…”
Section: Results
confidence: 99%
“…Based on these quality attributes, frame-level quality scores were generated and used for perceptual video quality estimation using a recurrent neural network. In [ 51 ], the authors improved further the previously mentioned method by introducing a sampling algorithm that eliminates temporal redundancy in video sequences by choosing representative video frames.…”
Section: Literature Review
confidence: 99%
“…Comparing NR-IQA to NR-VQA models, deep learning is widely used for the former task while only a few NR-VQA deep learning models have been proposed to date [43]. Moreover, improvements are often observed when combining the distortion and content-aware features for estimating NR-IQA [37], [44] and NR-VQA [73]. Thus, we take advantage of these studies and extract these two types of features in our study.…”
Section: A. Feature Extraction
confidence: 99%
“…The more severe the image distortion is, the lower the quality score will be. Recently, it was revealed that deep neural network (DNN) features are distortion-sensitive [45], [46], and NR-IQA/VQA methods began to incorporate networks for predicting distortion in their model [37], [47], [73]. Additionally, it was shown that DNN layers of increasing depth learn features of growing complexity.…”
Section: Distortion Features
confidence: 99%