Quality of Experience (QoE) in multimedia applications is closely linked to the end users' perception, and its assessment therefore requires subjective user studies to evaluate the degree of delight or annoyance experienced by the users. QoE crowdtesting refers to QoE assessment using crowdsourcing, where anonymous test subjects conduct subjective tests remotely in their preferred environment. The advantages of QoE crowdtesting lie not only in the reduced time and cost of the tests, but also in a large and diverse panel of international, geographically distributed users in realistic user settings. However, the remote test setting gives rise to conceptual and technical challenges. Key issues in QoE crowdtesting include the reliability of user ratings and the influence of incentives, payment schemes, and the unknown environmental context of the tests on the results. To counter these issues, strategies and methods need to be developed, incorporated into the test design, and implemented in the actual test campaign, while statistical methods are required to identify reliable user ratings and to ensure high data quality. This contribution therefore provides a collection of best practices addressing these issues, based on our experience from a large set of conducted QoE crowdtesting studies. The focus of this article is in particular on the issue of reliability; we use video quality assessment as an example for the proposed best practices and show that our recommended two-stage QoE crowdtesting design leads to more reliable results.
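As an illustration of the kind of statistical reliability screening mentioned above, the following Python sketch flags workers whose ratings correlate poorly with the leave-one-out mean opinion score (MOS) of the remaining panel. This is a minimal sketch of one plausible screening rule, not the exact procedure of the paper; the correlation threshold and the minimum number of common ratings are hypothetical parameters.

```python
import numpy as np

def screen_unreliable_workers(ratings, min_corr=0.5, min_common=3):
    """Flag workers whose ratings track the panel consensus poorly.

    ratings: array of shape (n_workers, n_stimuli), NaN where a worker
    did not rate a stimulus. Returns a boolean mask of reliable workers.
    min_corr and min_common are hypothetical screening parameters.
    """
    ratings = np.asarray(ratings, dtype=float)
    reliable = np.zeros(ratings.shape[0], dtype=bool)
    for w in range(ratings.shape[0]):
        others = np.delete(ratings, w, axis=0)
        mos = np.nanmean(others, axis=0)              # leave-one-out MOS
        valid = ~np.isnan(ratings[w]) & ~np.isnan(mos)
        if valid.sum() < min_common:
            continue                                  # too little overlap to judge
        r = np.corrcoef(ratings[w][valid], mos[valid])[0, 1]
        reliable[w] = r >= min_corr
    return reliable
```

In a two-stage design, such a screening step would typically run between the first and second stage, so that only workers who pass it are invited to the actual quality test.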
Video quality assessment with subjective testing is both time consuming and expensive. An interesting new alternative to traditional testing is so-called crowdsourcing, which moves the testing effort onto the Internet. We therefore propose in this contribution the QualityCrowd framework to perform subjective quality assessment with crowdsourcing with little effort. QualityCrowd allows codec-independent quality assessment through a simple web interface, usable with common web browsers. We compared the results of an online subjective test using this framework with the results of a test in a standardized environment. This comparison shows that QualityCrowd delivers equivalent results within the acceptable inter-lab correlation. While we only consider video quality in this contribution, QualityCrowd can also be used for multimodal quality assessment.
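To make the comparison concrete: checking whether a crowd test reproduces a lab test typically means correlating the mean opinion scores obtained for the same stimuli in both settings. The Python sketch below shows this check; the 0.9 acceptance threshold is a placeholder for whatever inter-lab correlation one considers acceptable, not a value taken from the paper.

```python
import numpy as np
from scipy import stats

def within_interlab_correlation(crowd_mos, lab_mos, threshold=0.9):
    """Compare crowd and lab MOS for the same stimuli via Pearson correlation.

    threshold is a hypothetical acceptance level, not the paper's value.
    """
    r, p_value = stats.pearsonr(crowd_mos, lab_mos)
    return r, p_value, bool(r >= threshold)

# Example: five stimuli rated in both settings (made-up numbers).
r, p, ok = within_interlab_correlation([4.1, 3.2, 2.5, 1.8, 4.6],
                                       [4.3, 3.0, 2.4, 2.0, 4.5])
```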
A no-reference video quality metric for High-Definition video is introduced. The metric evaluates a set of simple features such as blocking or blurring and combines those features into one parameter representing visual quality. While only comparatively few base feature measurements are used, additional parameters are gained by evaluating changes in these measurements over time and by applying additional temporal pooling methods. To take into account the different characteristics of different video sequences, the resulting quality value is corrected using a low-quality version of the received video. The metric is verified using data from accurate subjective tests, and special care was taken to separate the data used for calibration and verification. The proposed no-reference quality metric delivers a prediction accuracy of 0.86 when compared to subjective tests and significantly outperforms PSNR as a quality predictor.
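The following Python sketch illustrates the general idea of deriving additional parameters from the temporal behaviour of a base feature; the concrete pooling functions shown here are assumptions for illustration, not the exact set used by the metric.

```python
import numpy as np

def temporal_pooling(per_frame_feature):
    """Turn a per-frame feature series (e.g. blurriness) into clip-level
    parameters: plain averages plus statistics of its changes over time.
    The chosen pooling functions are illustrative assumptions.
    """
    x = np.asarray(per_frame_feature, dtype=float)
    dx = np.diff(x)                          # frame-to-frame changes
    return {
        "mean": x.mean(),
        "std": x.std(),
        "p10": np.percentile(x, 10),         # emphasise the worst segments
        "change_mean": np.abs(dx).mean(),
        "change_max": np.abs(dx).max(),
    }
```

Each pooled value becomes one input parameter for the quality model, which is how a small set of base features can still yield a richer description of the sequence.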
Video quality evaluation with subjective testing is both time consuming and expensive. A promising new alternative to traditional testing is so-called crowdsourcing, which moves the testing effort onto the Internet. The advantages of this approach are not only access to a larger and more diverse pool of test subjects, but also a significant reduction of the financial burden. Recent contributions have also shown that crowd-based video quality assessment can deliver results comparable to traditional testing in some cases. In general, however, new problems arise, as not every test detail can be controlled any more, resulting in less reliable results. In this contribution we therefore discuss the conceptual, technical, motivational, and reliability challenges that need to be addressed before this promising approach to subjective testing can become a valid alternative to testing in standardized environments.
This contribution presents a no-reference video quality metric based on a set of simple rules that assign a given video to one of four content classes. The four classes distinguish between video sequences that are coded at a very low data rate, sequences that are sensitive to blocking effects, sequences that are sensitive to blurring, and a general class for all other types of video sequences. The appropriate class for a given video sequence is selected by evaluating feature values of an additional low-quality version of that video, which is generated by encoding. The visual quality of a video sequence is estimated using a set of features that includes measures for blockiness, blurriness, and spatial activity, together with a set of additional continuity features. How these features are combined into one overall quality value is determined by the content class to which the video has been assigned. We also propose an additional correction step for the visual quality value. The proposed metric is verified in a process that combines visual quality values originating from subjective quality tests with a cross-validation approach. The presented metric significantly outperforms PSNR as a visual quality estimator; the Pearson correlation between the estimated visual quality values and the subjective test results takes on values as high as 0.82.
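A compact way to picture the class-based combination is a rule that first assigns the video to a class using features of the low-quality version, and then applies class-specific weights to the feature vector. The Python sketch below follows this structure; all thresholds and weights are made-up placeholders, since in the paper they result from calibration against subjective tests.

```python
import numpy as np

# Hypothetical per-class weights; the paper's values come from calibration.
CLASS_WEIGHTS = {
    "low_rate": np.array([0.5, 0.2, 0.2, 0.1]),
    "blocking": np.array([0.1, 0.6, 0.1, 0.2]),
    "blurring": np.array([0.1, 0.1, 0.6, 0.2]),
    "general":  np.array([0.25, 0.25, 0.25, 0.25]),
}

def assign_class(lq_features):
    """Pick a content class from features of the low-quality version.
    All thresholds are illustrative placeholders."""
    if lq_features["bitrate"] < 200:          # kbit/s
        return "low_rate"
    if lq_features["blockiness"] > 0.7:
        return "blocking"
    if lq_features["blurriness"] > 0.7:
        return "blurring"
    return "general"

def estimate_quality(features, lq_features):
    """Combine blockiness, blurriness, spatial activity and continuity
    with class-dependent weights into one quality value."""
    cls = assign_class(lq_features)
    x = np.array([features["blockiness"], features["blurriness"],
                  features["spatial_activity"], features["continuity"]])
    return float(CLASS_WEIGHTS[cls] @ x)
```

The design choice here is that the class only selects the weighting, while the feature set itself stays the same for all classes, which keeps the model simple and the calibration per class tractable.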