Automatically assessing the quality of an image is a critical problem for a wide range of applications in the fields of computer vision and image processing. For example, many computer vision applications, such as biometric identification, content retrieval, and object recognition, rely on input images within a specific range of quality. Therefore, an effort has been made to develop image quality assessment (IQA) methods that are able to estimate quality automatically. Among the possible IQA approaches, No-Reference IQA (NR-IQA) methods are of fundamental interest, since they can be used in most real-time multimedia applications. NR-IQA methods are capable of assessing the quality of an image without using the reference (or pristine) image. In this paper, we investigate the use of texture descriptors in the design of NR-IQA methods. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate quality. To investigate whether this premise is valid, we analyze the use of a set of state-of-the-art Local Binary Patterns (LBP) texture descriptors in IQA methods. In particular, we present a comprehensive review with a detailed description of the considered methods. Additionally, we propose a framework for using texture descriptors in NR-IQA methods. Our experimental results indicate that, although not all texture descriptors are suitable for NR-IQA, many can be used for this purpose, achieving good accuracy with the advantage of low computational complexity.

The filters W_i are learned via independent component analysis (ICA). The binarized features b_i are aggregated following the same procedure described for generating the LBP labels. The descriptive features are obtained by computing the histogram of the aggregated data. Similarly to the LBP, which generates LBP channels, the BSIF generates coded images.
These coded images are the set of labels generated after the binarized features are computed using Equation (17) and aggregated using Equation (1). The aggregation of BSIF results is based on a selected number of bits, instead of on the number of neighbors of the labeled pixel. The labeling depends on the relationship between the patch size l and the number of binarized features b_i. Figure 6 shows the BSIF coded images corresponding to the same reference image using different BSIF parameters. As can be seen in this figure, the texture information depends on the patch size l and on the number of bits. The number of bits is less than or equal to l² − 1, which is why the second column does not contain BSIF coded images for 9, 10, 11, or 12 bits. Figure 6 shows that the choice of the number of bits and patch size is important for texture analysis algorithms. Therefore, multiscale approaches that incorporate several combinations of these parameters are of interest [74][75][76][77].
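The BSIF pipeline described above (filter the image, binarize each response at zero, pack the bits into integer labels, and histogram the labels) can be sketched as follows. This is a minimal illustration, not the reference implementation: the random filters here merely stand in for the ICA-learned filters W_i, and the function name `bsif_histogram` is ours.

```python
import numpy as np

def bsif_histogram(image, filters):
    """BSIF-style descriptor sketch: convolve the image with each filter,
    binarize the responses at zero, pack the n bits into integer labels
    (one label per pixel position), and return the normalized histogram."""
    l = filters[0].shape[0]
    # Extract all l x l patches; shape (H-l+1, W-l+1, l, l)
    patches = np.lib.stride_tricks.sliding_window_view(image, (l, l))
    n_bits = len(filters)
    labels = np.zeros(patches.shape[:2], dtype=np.int64)
    for i, w in enumerate(filters):
        # Filter response for every patch (valid-mode correlation)
        response = np.einsum('ijkl,kl->ij', patches, w)
        # b_i = 1 where the response is positive; shift into bit i of the label
        labels += (response > 0).astype(np.int64) << i
    hist = np.bincount(labels.ravel(), minlength=2 ** n_bits)
    return hist / hist.sum()

# Hypothetical usage: random filters in place of the ICA-learned W_i
rng = np.random.default_rng(0)
l, n_bits = 5, 8                  # patch size l; n_bits must be <= l**2 - 1
filters = [rng.standard_normal((l, l)) for _ in range(n_bits)]
image = rng.standard_normal((64, 64))
h = bsif_histogram(image, filters)
```

With 8 bits the descriptor has 2⁸ = 256 bins, one per possible label, which is why the bit count and patch size jointly control the granularity of the texture representation discussed above.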
The wide use of cameras by the public has raised interest in image quality evaluation and ranking. Current cameras embed complex processing pipelines that adapt strongly to the scene content by implementing, for instance, advanced noise reduction or local adjustment on faces. However, current methods of image quality assessment are based on static geometric charts, which are not representative of common camera usage, which mostly targets portraits. Moreover, on non-synthetic content, the most relevant features, such as detail preservation or noisiness, are often intractable. To overcome this situation, we propose to mix classical measurements with machine-learning-based methods: we reproduce realistic content that triggers these complex processing pipelines in controlled conditions in the lab, which allows for rigorous quality assessment. Then, ML-based methods can reproduce previously annotated perceptual quality. In this paper, we focus on noise quality evaluation and test on two different setups: close-up and distant portraits. These setups provide flexibility in scene capture conditions, but, most of all, they allow the evaluation of the full range of camera quality, from high-quality DSLRs to video conference devices. Our numerical results show the relevance of our solution compared to geometric charts and the importance of adapting to realistic content.