In this work we investigate the use of deep learning for distortion-generic blind image quality assessment. We report on different design choices, ranging from the use of features extracted from pre-trained Convolutional Neural Networks (CNNs) as a generic image description, to the use of features extracted from a CNN fine-tuned for the image quality task. Our best proposal, named DeepBIQ, estimates the image quality by average-pooling the scores predicted on multiple sub-regions of the original image. Experimental results on the LIVE In the Wild Image Quality Challenge Database show that DeepBIQ outperforms the compared state-of-the-art methods, achieving a Linear Correlation Coefficient (LCC) with human subjective scores of almost 0.91. These results are further confirmed on four benchmark databases of synthetically distorted images: LIVE, CSIQ, TID2008 and TID2013.
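A minimal sketch of the average-pooling inference step described above, assuming a PyTorch setup: the `QualityCNN` class, the ResNet-50 backbone, the number and size of crops, and random crop sampling are illustrative assumptions, not the paper's actual fine-tuned network or settings.

```python
import torch
import torchvision.transforms as T
from torchvision.models import resnet50

# Hypothetical quality regressor: a generic backbone (ResNet-50 as a stand-in)
# with a single-output head predicting one quality score per sub-region.
class QualityCNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        net = resnet50(weights=None)
        net.fc = torch.nn.Linear(net.fc.in_features, 1)  # regression head
        self.net = net

    def forward(self, x):
        return self.net(x).squeeze(-1)

def predict_quality(image, model, n_crops=30, crop_size=224):
    """Average-pool the scores predicted on multiple sub-regions of a PIL image."""
    crop = T.Compose([T.RandomCrop(crop_size), T.ToTensor()])
    patches = torch.stack([crop(image) for _ in range(n_crops)])  # sample sub-regions
    model.eval()
    with torch.no_grad():
        scores = model(patches)          # one predicted score per sub-region
    return scores.mean().item()          # average pooling of the patch scores
```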
In this work we describe a Convolutional Neural Network (CNN) to accurately predict the scene illumination. Taking image patches as input, the CNN works in the spatial domain without using the hand-crafted features employed by most previous methods. The network consists of one convolutional layer with max pooling, one fully connected layer and three output nodes. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating scene illumination. This approach achieves state-of-the-art performance on a standard dataset of RAW images. Preliminary experiments on images with spatially varying illumination demonstrate the stability of our CNN's local illuminant estimates.
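A possible PyTorch rendering of the architecture described above (one convolutional layer, max pooling, one fully connected layer, three outputs). The patch size, number and size of kernels, pooling window and hidden width are assumptions for illustration, not the paper's hyper-parameters.

```python
import torch
import torch.nn as nn

class IlluminantCNN(nn.Module):
    """One convolutional layer with max pooling, one fully connected layer,
    and three output nodes predicting the RGB illuminant of an input patch."""
    def __init__(self, patch_size=32, n_kernels=64, hidden=64):
        super().__init__()
        self.conv = nn.Conv2d(3, n_kernels, kernel_size=1)        # convolutional layer
        self.pool = nn.MaxPool2d(kernel_size=8, stride=8)         # max pooling
        side = patch_size // 8
        self.fc = nn.Linear(n_kernels * side * side, hidden)      # fully connected layer
        self.out = nn.Linear(hidden, 3)                           # three output nodes

    def forward(self, x):                 # x: (batch, 3, patch_size, patch_size)
        x = self.pool(torch.relu(self.conv(x)))
        x = torch.flatten(x, 1)
        x = torch.relu(self.fc(x))
        return self.out(x)                # per-patch illuminant estimate
```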
In this paper we present a method for estimating the color of the illuminant in RAW images. The method includes a Convolutional Neural Network specially designed to produce multiple local estimates. A multiple-illuminant detector determines whether or not the local outputs of the network must be aggregated into a single estimate. We evaluated our method on standard datasets with single and multiple illuminants, obtaining lower estimation errors than those of other general-purpose state-of-the-art methods.
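A sketch of the aggregation step only, under stated assumptions: the network's per-patch outputs are given as an (N, 3) array, the multiple-illuminant detector's decision is a boolean, and median pooling is used as the aggregation rule purely for illustration (the paper's actual detector and aggregation rule are not reproduced here).

```python
import numpy as np

def aggregate_local_estimates(local_estimates, single_illuminant):
    """local_estimates: (N, 3) per-patch illuminant estimates from the network.
    single_illuminant: decision of the multiple-illuminant detector (bool)."""
    local_estimates = np.asarray(local_estimates, dtype=float)
    if single_illuminant:
        # Aggregate the local outputs into a single global estimate
        # (median pooling chosen here only as an example).
        est = np.median(local_estimates, axis=0)
        return est / np.linalg.norm(est)
    # Otherwise keep the per-region estimates for spatially varying correction.
    return local_estimates / np.linalg.norm(local_estimates, axis=1, keepdims=True)
```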
In this work, we investigate how illuminant estimation techniques can be improved by taking into account automatically extracted information about the content of the images. We consider indoor/outdoor classification because images of these two classes present different content and are usually taken under different illumination conditions. We design different strategies for selecting and tuning the most appropriate algorithm (or combination of algorithms) for each class. We also introduce an uncertainty class for the images on which the indoor/outdoor classifier is not confident enough. The illuminant estimation algorithms considered here are derived from the framework recently proposed by Van de Weijer and Gevers, and we present a procedure to automatically tune their parameters. We tested the proposed strategies on a suitable subset of the widely used Funt and Ciurea dataset. Experimental results clearly demonstrate that classification-based strategies outperform general-purpose algorithms.
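A sketch of the classification-based strategy, assuming the framework's algorithms are parameterized by a derivative order n, a Minkowski norm p and a smoothing scale sigma; the per-class parameter values, the confidence threshold and the function names below are placeholders, not the automatically tuned values reported in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def framework_estimate(img, n, p, sigma):
    """One instance of the (n, p, sigma) framework: Minkowski p-norm of the
    sigma-smoothed channel (n = 0) or of its n-th order gradient magnitude."""
    est = np.zeros(3)
    for c in range(3):
        if n == 0:
            resp = np.abs(gaussian_filter(img[..., c], sigma))
        else:
            dy = gaussian_filter(img[..., c], sigma, order=(n, 0))
            dx = gaussian_filter(img[..., c], sigma, order=(0, n))
            resp = np.hypot(dx, dy)
        est[c] = (resp ** p).mean() ** (1.0 / p)
    return est / np.linalg.norm(est)

# Placeholder per-class settings; the paper tunes these automatically per class.
CLASS_PARAMS = {'indoor': (0, 1, 1.0), 'outdoor': (1, 6, 2.0), 'uncertain': (0, 6, 2.0)}

def classify_and_estimate(img, indoor_prob, threshold=0.7):
    """Pick the per-class setting from the indoor/outdoor classifier output,
    falling back to a general-purpose setting for the uncertainty class."""
    if indoor_prob >= threshold:
        cls = 'indoor'
    elif indoor_prob <= 1.0 - threshold:
        cls = 'outdoor'
    else:
        cls = 'uncertain'                 # classifier not confident enough
    return framework_estimate(img, *CLASS_PARAMS[cls])
```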