An important aim of research on the blind image quality assessment (IQA) problem is to devise perceptual models that can predict the quality of distorted images with as little prior knowledge of the images or their distortions as possible. Current state-of-the-art 'general purpose' no reference (NR) IQA algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores. However, we have recently derived a blind IQA model that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images, and, indeed, without any exposure to distorted images. Thus, it is 'completely blind.' The new IQA model, which we call the Natural Image Quality Evaluator (NIQE), is based on the construction of a 'quality aware' collection of statistical features built from a simple and successful space domain natural scene statistic (NSS) model. These features are derived from a corpus of natural, undistorted images. Experimental results show that the new index delivers performance comparable to top-performing NR IQA models that require training on large databases of human opinions of distorted images. A software release is available at: http://live.ece.utexas.edu/research/quality/niqe release.zip.

Index Terms: Completely blind, distortion free, no reference, image quality assessment
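In the NIQE formulation, the 'quality aware' NSS features extracted from image patches are summarized by a multivariate Gaussian (MVG) model, and quality is expressed as a distance between the MVG fitted to the natural-image corpus and the MVG fitted to the test image. The sketch below illustrates only that final distance computation (function and variable names are ours, not from the released software), assuming the feature means and covariances have already been estimated:

```python
import numpy as np

def mvg_distance(mu_nat, cov_nat, mu_test, cov_test):
    """Distance between two multivariate Gaussian (MVG) models of
    NSS features: one fitted to a corpus of pristine natural images,
    one fitted to the patches of the image under test.

    Computes sqrt((mu1 - mu2)^T ((Sigma1 + Sigma2)/2)^{-1} (mu1 - mu2)),
    using a pseudo-inverse for numerical robustness.
    """
    diff = np.asarray(mu_nat) - np.asarray(mu_test)
    pooled = (np.asarray(cov_nat) + np.asarray(cov_test)) / 2.0
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```

A larger distance indicates features that deviate more from the natural-image model, i.e., lower predicted quality; identical models give a distance of zero.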
I. INTRODUCTION

Americans captured 80 billion digital photographs in 2011, and this number is increasing annually [1]. More than 250 million photographs are posted daily on Facebook. Consumers are drowning in digital visual content, and finding ways to review and control the quality of digital photographs is becoming quite challenging. At the same time, camera manufacturers continue to provide improvements in photographic quality and resolution. The raw captured images pass through multiple post-processing steps in the camera pipeline, each requiring parameter tuning. A problem of great interest is to find ways to automatically evaluate and control the perceptual quality of the visual content as a function of these multiple parameters.

Objective image quality assessment refers to automatically predicting the quality of distorted images as it would be perceived by an average human. If a naturalistic reference image is supplied against which the quality of the distorted image can be compared, the model is called full reference (FR) [2].
We present the results of a recent large-scale subjective study of video quality on a collection of videos distorted by a variety of application-relevant processes. Methods to assess the visual quality of digital videos as perceived by human observers are becoming increasingly important, due to the large number of applications that target humans as the end users of video. Owing to the many approaches to video quality assessment (VQA) that are being developed, there is a need for a diverse, independent, public database of distorted videos and subjective scores that is freely available. The resulting Laboratory for Image and Video Engineering (LIVE) Video Quality Database contains 150 distorted videos (obtained from ten uncompressed reference videos of natural scenes) that were created using four different commonly encountered distortion types. Each video was assessed by 38 human subjects, and the difference mean opinion scores (DMOS) were recorded. We also evaluated the performance of several state-of-the-art, publicly available full-reference VQA algorithms on the new database. A statistical evaluation of the relative performance of these algorithms is also presented. The database has a dedicated web presence that will be maintained as long as it remains relevant, and the data is available online.
We study the design of automatic "reduced reference" image quality assessment algorithms from the point of view of image information change. Such changes are measured between the reference image and natural image approximations of the distorted image. We design algorithms that measure differences between the entropies of the wavelet coefficients of the reference and distorted images as perceived by humans. The algorithms differ in the data on which the entropy difference is calculated and in the amount of information from the reference that is required for quality computation, ranging from almost full information to almost no information from the reference. A special case is the class of algorithms that require just a single number from the reference for quality assessment. The algorithms are shown to correlate very well with subjective quality scores, as demonstrated on the LIVE Image Quality Assessment Database and the Tampere Image Database. The performance degradation as the amount of information is reduced is also studied.
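The published reduced-reference algorithms compute scaled conditional entropies of wavelet coefficients under a Gaussian scale mixture model; the sketch below is a deliberately simplified illustration of the underlying idea only: decompose both images with a one-level Haar wavelet transform, estimate the Shannon entropy of each detail subband from a histogram, and accumulate the absolute entropy differences. All function names and the choice of a histogram-based entropy estimate are our own simplifying assumptions:

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar transform; returns (LL, LH, HL, HH) subbands."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    avg = (img[0::2] + img[1::2]) / 2    # vertical average
    det = (img[0::2] - img[1::2]) / 2    # vertical detail
    ll = (avg[:, 0::2] + avg[:, 1::2]) / 2
    hl = (avg[:, 0::2] - avg[:, 1::2]) / 2
    lh = (det[:, 0::2] + det[:, 1::2]) / 2
    hh = (det[:, 0::2] - det[:, 1::2]) / 2
    return ll, lh, hl, hh

def subband_entropy(coeffs, bins=64):
    """Shannon entropy (bits) of a subband, estimated from a histogram."""
    hist, _ = np.histogram(coeffs, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_difference(ref, dist):
    """Sum of absolute entropy differences over the detail subbands."""
    ref_detail = haar_subbands(ref)[1:]
    dist_detail = haar_subbands(dist)[1:]
    return sum(abs(subband_entropy(r) - subband_entropy(d))
               for r, d in zip(ref_detail, dist_detail))
```

Note how the reduced-reference trade-off appears even in this toy version: only the three detail-subband entropies of the reference (three numbers) would need to be transmitted alongside the distorted image to compute the score.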