Proceedings of the 30th ACM International Conference on Multimedia 2022
DOI: 10.1145/3503161.3548064
Multiview Contrastive Learning for Completely Blind Video Quality Assessment of User Generated Content

Cited by 5 publications (5 citation statements) · References 30 publications
“…Unsupervised VQA. VQA methods such as STEM (Kancharla and Channappayya 2022), VISION (Mitra and Soundararajan 2022), and NVQE (Liao et al. 2022) do not require any human-labelled videos in their design and give reasonable quality estimates for UGC videos. Nevertheless, their performance falls short of methods trained with human opinion scores.…”
Section: Related Work
“…To capture quality-aware representations, we choose contrastive pairs of video clips from synthetically distorted UGC videos having similar content but different levels and types of distortions. We synthetically distort UGC videos similar to VISION (Mitra and Soundararajan 2022) to model mixed camera-captured and synthetic distortions from which…”
Section: Spatio-Temporal VQ Representation Learning
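The excerpt above describes forming positive pairs from clips that share content but carry different synthetic distortions, with the rest of the batch serving as negatives. A minimal sketch of an InfoNCE-style objective for such pairs is below; the NumPy implementation, function name, and toy embeddings are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch of embedding pairs.

    anchors[i] and positives[i] are embeddings of two clips that share
    content but carry different distortions (a positive pair); every
    other clip in the batch acts as a negative for anchor i.
    """
    # L2-normalise so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # log-softmax over each row; diagonal entries are the positive pairs
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

With correctly matched pairs the loss is driven toward zero, while mismatched (shuffled) pairs yield a large loss, which is the signal that pulls same-content clips together in embedding space regardless of distortion.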
“…Similar to FID, FVD also demonstrates a weak correlation with human visual perception. As related research, user-generated content (UGC) VQA models [57], such as SimpleVQA [58], FastVQA [59], DOVER [60], OV-PSNR [61], and SSL [62], have attempted to utilize action recognition networks (e.g., SlowFast [20], Video Swin Transformer [63]) to represent temporal quality features.…”
Section: B. Quality Metrics for AIGC Videos