2012 Fourth International Workshop on Quality of Multimedia Experience
DOI: 10.1109/qomex.2012.6263865
The TUM high definition video datasets

Abstract: Video quality evaluation with subjective testing is both time-consuming and expensive. A promising alternative to traditional testing is so-called crowdsourcing, which moves the testing effort onto the Internet. The advantages of this approach are not only access to a larger and more diverse pool of test subjects, but also a significant reduction of the financial burden. Recent contributions have also shown that crowd-based video quality assessment can deliver results comparable to traditional testing in…

Cited by 40 publications (28 citation statements) | References 10 publications
“…Thus we have in total 20 data points, each representing a combination of coding condition and content for the TUM1080p50 data set. The data set is available at [317] and for more information I refer to [136,266].…”
Section: TUM1080p50
confidence: 99%
“…In the end, however, only the data points relating to the H.264/AVC encoded video sequences can be used and thus only a subset of 32 data points will be used in the performance comparison in this thesis. The data set is available at [317] and for more information I refer to [136,139]. …”
Section: TUM1080p25
confidence: 99%
“…We gratefully acknowledge the work of TU München and their high-definition video dataset [7], as well as the JIKU video dataset [8], which provided different datasets of high-quality video. In addition, we have recorded 349 of our own video sequences during a music festival and a football match.…”
Section: Evaluated Videos
confidence: 99%
“…The JIKU dataset allows us to have a valid and realistic set of recordings of a live event that show realistic degradations, such as shakes or occlusions, quite well. But as neither our dataset nor the JIKU dataset shows all degradations and the corresponding parameters in the required fine granularity, we decided to artificially impair videos from the TU München dataset [7]. The traces of potential shaking or occurrences of occlusions were retrieved from the JIKU dataset [8] and our own videos using video analysis, and applied to the TU München dataset using frame manipulation methods.…”
Section: Evaluated Videos
confidence: 99%
“…All in all, we have 20 different sequences with corresponding subjective visual quality given as mean opinion scores (MOS), based on a discrete voting scale from 0 to 10. For more information we refer to [11]; the results of this dataset are also discussed in detail in [12].…”
Section: Data Set
confidence: 99%
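The citation above describes per-sequence quality ratings aggregated into mean opinion scores on a discrete 0–10 voting scale. As a minimal illustrative sketch (not the authors' processing pipeline, and using made-up votes rather than TUM data), the MOS and a 95% confidence interval for one sequence can be computed like this:

```python
# Minimal sketch: MOS and 95% confidence interval from per-subject
# votes on a discrete 0-10 scale. Votes below are hypothetical,
# not taken from the TUM datasets.
import math

def mos_with_ci(votes, z=1.96):
    """Return (MOS, 95% CI half-width) for one test sequence."""
    n = len(votes)
    mos = sum(votes) / n
    # Sample variance with n - 1 in the denominator (Bessel's correction).
    var = sum((v - mos) ** 2 for v in votes) / (n - 1)
    # Half-width of the normal-approximation confidence interval.
    ci = z * math.sqrt(var / n)
    return mos, ci

votes = [7, 8, 6, 9, 7, 8, 7, 6, 8, 7]  # hypothetical votes of 10 subjects
mos, ci = mos_with_ci(votes)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```

In practice, subjective-testing recommendations also prescribe outlier screening of subjects before averaging; the sketch above shows only the final aggregation step.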