No-reference video quality assessment for user generated content based on deep network and visual perception
2021. DOI: 10.1117/1.jei.30.5.053026

Cited by 2 publications (2 citation statements, both mentioning; published 2023 and 2024). References 0 publications.
“…Blue and black numbers in bold represent the best and second best, respectively. We take numbers from (Ying et al., 2021; Jiang et al., 2021; You, 2021; Tan et al., 2021; Liao et al., 2022) for the results of the reference methods. Our final method is marked in gray.…”
Section: Results on LSVQ and LSVQ-1080p (mentioning)
confidence: 99%
“…It shows that exploiting both global and local information can be beneficial for VQA. Recent CNN-Transformer hybrid methods (Jiang et al., 2021; Li et al., 2021; Tan et al., 2021; You, 2021) show the benefit of using Transformer for temporal aggregation on CNN-based frame-level features. Since all these methods use CNN for spatial feature extraction, they suffer from CNN's limitation, i.e., a relatively small spatial receptive field.…”
Section: Related Work (mentioning)
confidence: 99%
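
The second citation statement describes a common CNN-Transformer hybrid pattern: a CNN extracts frame-level spatial features and a Transformer aggregates them over time before a quality score is regressed. Below is a minimal sketch of that pattern, assuming PyTorch with a ResNet-50 backbone; the class name, hyperparameters, and mean-pooling choice are illustrative assumptions, not the implementation of the cited paper or of any of the referenced methods.

```python
# Sketch of the CNN-Transformer hybrid VQA pattern described in the quote:
# CNN for per-frame spatial features, Transformer encoder for temporal
# aggregation, linear head for the quality score. All names and
# hyperparameters are illustrative, not taken from the cited works.
import torch
import torch.nn as nn
from torchvision.models import resnet50  # torchvision >= 0.13 API assumed

class HybridVQA(nn.Module):
    def __init__(self, d_model=2048, n_heads=8, n_layers=2):
        super().__init__()
        backbone = resnet50(weights=None)
        # Drop the classification head; keep the 2048-d pooled features.
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # scalar quality score

    def forward(self, video):
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.reshape(b * t, c, h, w))  # (b*t, 2048, 1, 1)
        feats = feats.reshape(b, t, -1)                  # (b, t, 2048)
        feats = self.temporal(feats)                     # temporal aggregation
        return self.head(feats.mean(dim=1)).squeeze(-1)  # (b,)

# Usage: score a batch of two 16-frame clips of 224x224 frames.
model = HybridVQA().eval()
with torch.no_grad():
    scores = model(torch.randn(2, 16, 3, 224, 224))
```

Mean-pooling the Transformer outputs is just one simple aggregation choice; the methods cited in the statement differ in their backbones and in how frame features are pooled, which this sketch does not attempt to reproduce.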