A no-reference video quality metric for High-Definition video is introduced. This metric evaluates a set of simple features such as blocking or blurring, and combines those features into one parameter representing visual quality. While only comparatively few base feature measurements are used, additional parameters are obtained by evaluating how these measurements change over time and by applying additional temporal pooling methods. To take into account the different characteristics of different video sequences, the obtained quality value is corrected using a low-quality version of the received video. The metric is verified using data from accurate subjective tests, and special care was taken to separate the data used for calibration from the data used for verification. The proposed no-reference quality metric delivers a prediction accuracy of 0.86 when compared to subjective test results, and significantly outperforms PSNR as a quality predictor.
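The following is a minimal sketch, in Python, of the temporal-pooling idea outlined in this abstract: per-frame feature measurements (hypothetical blockiness and blurriness series) are pooled over time into several parameters, which are then combined into one quality value. The pooling statistics and the weights are illustrative assumptions, not the calibrated values of the published metric.

```python
import numpy as np

def pool_over_time(per_frame_values):
    """Derive several temporal parameters from one per-frame feature measurement."""
    v = np.asarray(per_frame_values, dtype=float)
    return {
        "mean": v.mean(),
        "worst_10pct": np.percentile(v, 90),  # emphasize the worst-case frames
        "std": v.std(),                       # temporal variation of the feature
    }

def predict_quality(blockiness, blurriness, weights):
    """Combine pooled feature parameters into a single quality value (illustrative weights)."""
    params = {}
    for name, series in (("block", blockiness), ("blur", blurriness)):
        for stat, value in pool_over_time(series).items():
            params[f"{name}_{stat}"] = value
    return weights["bias"] + sum(weights[k] * v for k, v in params.items())

# Hypothetical per-frame measurements and weights, for illustration only.
blockiness = np.random.rand(250)
blurriness = np.random.rand(250)
weights = {"bias": 5.0,
           "block_mean": -1.0, "block_worst_10pct": -0.5, "block_std": -0.2,
           "blur_mean": -1.2, "blur_worst_10pct": -0.6, "blur_std": -0.3}
print(predict_quality(blockiness, blurriness, weights))
```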
This contribution presents a no-reference video quality metric based on a set of simple rules that assign a given video to one of four content classes. The four content classes distinguish between video sequences coded at a very low data rate, sequences sensitive to blocking effects, sequences sensitive to blurring, and a general class for all other types of video sequences. The appropriate class for a given video sequence is selected by evaluating feature values of an additional low-quality version of the given video, which is generated by encoding. The visual quality of a video sequence is estimated using a set of features, which includes measures for blockiness, blurriness, and spatial activity as well as a set of additional continuity features. How these features are combined into one overall quality value is determined by the content class to which the video has been assigned. We also propose an additional correction step for the visual quality value. The proposed metric is verified using visual quality values obtained from subjective quality tests in combination with a cross-validation approach. The presented metric significantly outperforms PSNR as a visual quality estimator. The Pearson correlation between the estimated visual quality values and the subjective test results reaches values as high as 0.82.
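As a rough illustration of the class-based approach described above, the Python sketch below selects one of four content classes from features of the low-quality version and then combines the features with class-specific coefficients. The thresholds, class names, and coefficients are hypothetical placeholders, not the calibrated rules of the metric.

```python
# Sketch of rule-based class selection and class-dependent feature combination.
# All thresholds and coefficients are assumed values for illustration only.

def select_content_class(low_quality_features, bit_rate_kbps):
    """Assign a sequence to one of four content classes using feature values
    of the additionally generated low-quality version."""
    if bit_rate_kbps < 500:                          # assumed threshold
        return "very_low_rate"
    if low_quality_features["blockiness"] > 0.6:     # assumed threshold
        return "blocking_sensitive"
    if low_quality_features["blurriness"] > 0.6:     # assumed threshold
        return "blurring_sensitive"
    return "general"

# One illustrative set of linear coefficients per class.
CLASS_MODELS = {
    "very_low_rate":      {"bias": 2.0, "blockiness": -0.5, "blurriness": -0.8, "spatial_activity": 0.1},
    "blocking_sensitive": {"bias": 4.5, "blockiness": -2.0, "blurriness": -0.5, "spatial_activity": 0.2},
    "blurring_sensitive": {"bias": 4.5, "blockiness": -0.5, "blurriness": -2.0, "spatial_activity": 0.2},
    "general":            {"bias": 4.0, "blockiness": -1.0, "blurriness": -1.0, "spatial_activity": 0.2},
}

def estimate_quality(features, content_class):
    """Combine the feature values according to the model of the selected class."""
    model = CLASS_MODELS[content_class]
    return model["bias"] + sum(model[k] * features[k] for k in features)

features = {"blockiness": 0.3, "blurriness": 0.7, "spatial_activity": 0.4}
cls = select_content_class(features, bit_rate_kbps=2000)
print(cls, estimate_quality(features, cls))
```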
This contribution presents results of the MPEG verification test that was carried out for the new Scalable Video Coding (SVC) Amendment of H.264/AVC. The test consisted of a series of subjective comparisons of SVC and single-layer H.264/AVC coding for different application scenarios, including conversational applications, broadcasting over mobile channels, and HD broadcasting. The results show that a reasonable degree of spatial and quality scalability can be supported with a bit-rate overhead of about 10% or less and with visual quality indistinguishable from state-of-the-art single-layer coding. This paper describes the coding conditions and the test procedure, and presents the results of the SVC verification test.
To improve the prediction accuracy of visual quality metrics for video, we propose two simple steps: temporal pooling, in order to obtain a set of parameters from one measured feature, and a correction step that uses videos of known visual quality. We demonstrate this approach on the well-known PSNR. Firstly, we achieve a more accurate quality prediction by replacing the mean luma PSNR with alternative PSNR-based parameters. Secondly, we exploit the almost linear relationship between the output of a quality metric and the subjectively perceived visual quality of individual video sequences. We do this by estimating the parameters of this linear relationship with the help of additionally generated videos of known visual quality, and we show that the relationship also holds across very different coding technologies. We use cross-validation to verify our results. Combining these two steps, we increase the Pearson correlation coefficient for a set of four different high-definition videos from 0.69 to 0.88 for PSNR, outperforming other, more sophisticated full-reference video quality metrics.
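A minimal sketch of the two proposed steps applied to PSNR, assuming per-frame luma PSNR values are already available: temporal pooling derives several parameters from the per-frame series, and a per-sequence linear correction is estimated from additionally generated videos whose visual quality is assumed to be known. All numeric values below are hypothetical.

```python
import numpy as np

def psnr_parameters(per_frame_psnr):
    """Temporal pooling: derive several parameters from per-frame luma PSNR
    instead of using only the mean."""
    p = np.asarray(per_frame_psnr, dtype=float)
    return {"mean": p.mean(),
            "min": p.min(),               # worst frame
            "p10": np.percentile(p, 10),  # near-worst-case behaviour
            "std": p.std()}               # temporal fluctuation

def fit_sequence_correction(metric_values, known_quality):
    """Estimate the (approximately linear) mapping between metric output and
    perceived quality for one sequence, using additionally generated videos
    whose visual quality is assumed to be known."""
    slope, intercept = np.polyfit(metric_values, known_quality, deg=1)
    return slope, intercept

# Hypothetical anchor videos of one sequence: metric output vs. assumed quality scores.
anchor_metric = np.array([28.0, 32.0, 36.0, 40.0])   # e.g. a pooled PSNR parameter in dB
anchor_quality = np.array([1.5, 2.8, 3.9, 4.6])      # assumed known subjective scores

slope, intercept = fit_sequence_correction(anchor_metric, anchor_quality)
received_psnr_parameter = 34.0
print("corrected quality estimate:", slope * received_psnr_parameter + intercept)
```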