2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR)
DOI: 10.1109/mipr49039.2020.00015

UGC-VIDEO: Perceptual Quality Assessment of User-Generated Videos

Abstract: Recent years have witnessed an exponential increase in the demand for face video compression, and the success of artificial intelligence has expanded the boundaries beyond traditional hybrid video coding. Generative coding approaches have been identified as promising alternatives with reasonable perceptual rate-distortion trade-offs, leveraging the statistical priors of face videos. However, the great diversity of distortion types in spatial and temporal domains, ranging from the traditional hybrid coding fram…

Cited by 15 publications (6 citation statements) · References 64 publications

“…Many recent (non-gaming) UGC video quality databases [15], [38], [39], [48] were created by harvesting a large number of source videos from one or more large free public video repositories, such as the Internet Archive [49] or YFCC-100M [16]. These are typically "winnowed" to a set of videos that are representative of a category of interest, such as social media videos.…”
Section: LIVE-YouTube Gaming Video Quality Database
confidence: 99%
“…Depending on the usage of the pristine reference video, VQA algorithms can be classified into three types, i.e., full-reference (FR), reduced-reference (RR), and no-reference (NR). In the absence of a pristine reference for in-capture content, NR-VQA models [1][2][3][4][5][6][7][8][9][10], which rely only on the impaired videos, are the most appropriate [11][12][13][14].…”
Section: Introduction
confidence: 99%
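
The FR/RR/NR taxonomy quoted above can be made concrete with a small sketch. The Python below is purely illustrative and is not taken from any of the cited works: the function names (fr_vqa, rr_vqa, nr_vqa) and the toy scoring rules inside them are hypothetical placeholders, chosen only to show how much of the pristine reference each family of VQA models consumes.

```python
# Hypothetical sketch (not from the cited papers): the three VQA families
# differ only in how much of the pristine reference they may access.
import numpy as np

def fr_vqa(reference: np.ndarray, distorted: np.ndarray) -> float:
    """Full-reference: compares the distorted video against the full pristine reference."""
    # Placeholder metric: mean squared error mapped to a (0, 1] quality score.
    mse = float(np.mean((reference - distorted) ** 2))
    return 1.0 / (1.0 + mse)

def rr_vqa(reference_features: np.ndarray, distorted: np.ndarray) -> float:
    """Reduced-reference: only a compact feature summary of the reference is available."""
    distorted_features = np.array([distorted.mean(), distorted.std()])
    return 1.0 / (1.0 + float(np.abs(reference_features - distorted_features).sum()))

def nr_vqa(distorted: np.ndarray) -> float:
    """No-reference: only the impaired video itself is available."""
    # Placeholder: low contrast as a crude stand-in for a learned NR quality model.
    return float(np.clip(distorted.std() / 128.0, 0.0, 1.0))
```

In the UGC setting discussed in these citation statements, distortions are introduced at capture time, so no pristine reference exists and only NR-style models, like the last function in the sketch, are applicable.
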
“…The automatic estimation of the quality of a UGC video as perceived by human observers is fundamental for a wide range of applications. For example, to discriminate professional and amateur video content on user-generated video distribution platforms [1], to choose the best sequence among many sequences for sharing in social media [2], to guide a video enhancement process [3], and to rank/choose user-generated videos [4,5].…”
Section: Introduction
confidence: 99%