In this study, a novel illuminant color estimation framework is proposed for color constancy, which combines the high representational capacity of deep-learning-based models with the interpretability of assumption-based models. The carefully designed building block, the feature map reweight unit (ReWU), achieves accuracy comparable to prior state-of-the-art models on benchmark datasets while requiring only 1%-5% of the model size and 8%-20% of the computational cost. In addition to local color estimation, a confidence estimation branch is included so that the model produces a point estimate and its uncertainty estimate simultaneously, which provides useful cues for aggregating local estimates and for multiple-illuminant estimation. The source code and the dataset are available at https://github.com/QiuJueqin/Reweight-CC.

Keywords: Color constancy, illuminant estimation, convolutional neural network, computer vision

Introduction

Color constancy of the human visual system is an essential prerequisite for many vision tasks; it compensates for the effect of the illumination on the perceived colors of objects. Many computer vision applications are designed to extract comprehensive information from the intrinsic colors of objects and therefore require the input images to be color-unbiased. Unfortunately, the photosensors in modern digital cameras cannot automatically compensate for illuminant colors. To address this issue, a variety of computational color constancy algorithms have been proposed to mimic the dynamic adjustments of the cones in the human visual system [1,2,3].

Computational color constancy generally works by first estimating the illuminant color and then compensating for it by multiplying the color-biased image by the reciprocal of the estimated illuminant color. Existing computational color constancy algorithms can be classified into a priori assumption-based methods and learning-based methods, according to whether a training process is required. Typical assumption-based algorithms include Gray-World [4], White-Patch [1], variants of Gray-Edge [5,6], and methods that exploit statistical information in the images [7]. Although assumption-based methods are lightweight and interpretable, their performance can degrade dramatically when these restrictive assumptions are not satisfied. Learning-based algorithms can be further grouped into low-level and high-level ones. Typical low-level methods include Color-by-Correlation [8], Gamut Mapping [9], Bayesian color constancy [10], etc. Since spatial and textural information is lost when generating low-level color descriptors, these methods are prone to producing ambiguous estimates if they have not "seen" the colorimetric patterns of the test images during training. In recent years, following the massive success of deep learning in the computer vision community, high-level color constancy algorithms based on convolutional neural networks (CNNs) have achieved state-of-the-art performance on the benchmark datasets ...
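As a minimal illustration of the estimate-then-compensate pipeline described above (not the proposed ReWU network), the following sketch applies a Gray-World illuminant estimate and a diagonal, von Kries-style correction to an RGB image; the function names and the synthetic test image are placeholder assumptions.

```python
import numpy as np

def gray_world_illuminant(image):
    """Estimate the illuminant color as the per-channel mean (Gray-World assumption)."""
    # image: float array of shape (H, W, 3) in linear RGB
    illuminant = image.reshape(-1, 3).mean(axis=0)
    # Normalize to unit length: only the chromaticity of the illuminant matters
    return illuminant / np.linalg.norm(illuminant)

def correct_image(image, illuminant):
    """Compensate the illuminant by multiplying each channel by its reciprocal."""
    gains = 1.0 / np.clip(illuminant, 1e-6, None)
    gains = gains / gains.max()          # rescale so no channel is amplified above 1
    return np.clip(image * gains, 0.0, 1.0)

# Usage with a random, artificially color-biased image as a stand-in for a real photograph
rgb = np.random.rand(256, 256, 3) * np.array([1.0, 0.8, 0.5])
estimate = gray_world_illuminant(rgb)
balanced = correct_image(rgb, estimate)
```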
We conducted psychophysical experiments to investigate the factors affecting the image quality of HDR displays. The results indicated that the OLED display has advantages over IPS and VA LCDs owing to its lower minimum luminance level and reduced pixel interaction, making it an appropriate choice for displaying HDR content.

Keywords: high dynamic range (HDR); display; image quality; psychophysical experiment
Metamer mismatching is a phenomenon in which two objects that are colorimetrically indistinguishable under one lighting condition become distinguishable under another. Because spectral information is unavailable, metamer mismatching introduces an inherent uncertainty into cameras' color reproduction. To investigate the degree of image quality degradation caused by metamer mismatching, a large spectral reflectance database was compiled in this study to search for the object-color metamer sets of the spectra in hyperspectral images. Metamer-degraded images were then constructed and compared with the ground-truth images using the directional statistics-based color similarity index image quality assessment metric to evaluate the perceptual degradation. The results indicate that object-color metamer mismatching has only a minor impact on image quality, whereas the inappropriate selection of color correction matrices associated with illuminant metamerism is the primary cause of the accuracy decrease in digital camera color reproduction.
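As a rough illustration of the metamer-mismatching phenomenon described above (not the database search or the image quality evaluation used in the study), the sketch below builds a metameric pair by adding a "metameric black" component that a sensor cannot see under one illuminant, then shows that the pair's responses diverge under a second illuminant. All sensitivities, illuminants, and reflectances are synthetic placeholders.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)             # 31 samples, 400-700 nm
n = wavelengths.size

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Synthetic placeholder sensor sensitivities (R, G, B) and two illuminant spectra
sensitivities = np.stack([gaussian(600, 30), gaussian(550, 30), gaussian(450, 30)])
illum_1 = np.ones(n)                               # flat, equal-energy light
illum_2 = np.linspace(0.4, 1.6, n)                 # reddish, incandescent-like light

def responses(reflectance, illuminant):
    """Sensor responses: per-channel sum of reflectance x illuminant x sensitivity."""
    return sensitivities @ (reflectance * illuminant)

# Build a metameric pair under illum_1: r2 = r1 + a "metameric black" lying in the
# null space of the (3 x n) matrix sensitivities * diag(illum_1)
r1 = 0.5 + 0.3 * np.sin(wavelengths / 40.0)
M1 = sensitivities * illum_1
_, _, vt = np.linalg.svd(M1)
black = vt[-1]                                     # direction that M1 maps to (almost) zero
r2 = r1 + 0.2 * black / np.abs(black).max()

print(responses(r1, illum_1) - responses(r2, illum_1))   # ~0: indistinguishable
print(responses(r1, illum_2) - responses(r2, illum_2))   # nonzero: mismatch appears
```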
In this paper, a camera response formation model is proposed to accurately predict the responses of images captured under various exposure settings. Unlike earlier works that estimated only the camera's relative spectral sensitivity, our model constructs the physical spectral sensitivity curves and the device-dependent parameters that convert the absolute spectral radiances of target surfaces into camera readout responses. With this model, the camera responses to arbitrary combinations of surfaces and illuminants can be accurately predicted, so an "imaging simulator" can be built, which would be of great convenience for colorimetric and photometric research based on cameras.
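A simple sketch of the kind of response formation the abstract describes: raw responses modeled as the spectral integral of surface radiance and absolute spectral sensitivity, scaled by device-dependent exposure parameters. All curves, parameter names, and values below are placeholder assumptions, not the calibrated model from the paper.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Placeholder absolute spectral sensitivities for the R, G, and B channels
sensitivities = np.stack([gaussian(600, 30), gaussian(550, 30), gaussian(450, 30)])

def camera_response(radiance, exposure_time, gain, black_level=64, full_scale=16383):
    """Predict raw readout responses from the absolute spectral radiance of a surface.

    radiance: spectral radiance on the same wavelength grid (W / sr / m^2 / nm)
    exposure_time: seconds; gain: ISO-like analog gain factor
    """
    # Spectral integration (trapezoidal rule over the wavelength grid)
    photo_signal = np.trapz(sensitivities * radiance, wavelengths, axis=1)
    # Device-dependent scaling from photo-signal to digital numbers, then clipping
    dn = black_level + gain * exposure_time * photo_signal
    return np.clip(np.round(dn), 0, full_scale)

# Usage: the same surface radiance rendered under two exposure settings
radiance = 5.0 * gaussian(560, 80)                 # placeholder surface radiance
print(camera_response(radiance, exposure_time=1/60, gain=800.0))
print(camera_response(radiance, exposure_time=1/250, gain=800.0))
```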