2023
DOI: 10.1109/tip.2023.3290528
Subjective and Objective Audio-Visual Quality Assessment for User Generated Content

Cited by 14 publications (2 citation statements)
References 42 publications
“…A similar method could be applied to video quality assessment, but we must bear in mind the computational complexity of such solutions and their suitability for real-time QoS management. Recently, some researchers focused their attention on User-Generated Content (UGC), like Yu et al [37] or Cao et al [38]. In both works, the authors mainly concentrated on constructing the UGC databases suitable for future QoE research.…”
Section: Related Work
confidence: 99%
“…In both works, the authors mainly concentrated on constructing the UGC databases suitable for future QoE research. Yet, both groups of researchers also proposed VQA models that can be used for QoE assessment [38] or for learning quality-aware audio and visual feature representations in the temporal domain [38]. Apart from UGC videos, another specific category of videos, demanding a unique approach to quality assessment, are nighttime videos, analyzed in [39], where the authors proposed a blind nighttime video quality assessment model based on feature fusion.…”
Section: Related Work
confidence: 99%