A self-training-based spectral reflectance recovery method was developed to accurately reconstruct the spectral images of art paintings from multispectral imaging. By partitioning the multispectral images with the k-means clustering algorithm, the training samples are extracted directly from the art painting itself, which mitigates the degradation of spectral estimation caused by material inconsistency between the training samples and the painting. Coordinate paper is used to locate the extracted training samples, whose spectral reflectances are acquired indirectly with a spectroradiometer; the circle Hough transform is adopted to detect the circular measuring area of the spectroradiometer. The implementation of the proposed method is explained in detail through a simulation and a practical experiment, and the method is verified to achieve better reflectance recovery than an approach using a commercial target and to be comparable to one using a painted color target.
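The abstract names two concrete computational steps: k-means partitioning of the multispectral image to pick training samples, and a circle Hough transform to locate the spectroradiometer's circular measuring spot. The sketch below is our own minimal illustration of those two steps, assuming a multispectral cube and a grayscale photograph of the coordinate paper; the function names, cluster count, and Hough parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def extract_training_samples(msi_cube, n_clusters=20, seed=0):
    """Partition a multispectral image with k-means and return one
    representative pixel location per cluster (candidate training samples).

    msi_cube: (H, W, B) array of B-band multispectral camera responses.
    """
    h, w, b = msi_cube.shape
    pixels = msi_cube.reshape(-1, b).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
    samples = []
    for k in range(n_clusters):
        # choose the pixel closest to each cluster centroid as its representative
        idx = np.where(km.labels_ == k)[0]
        dist = np.linalg.norm(pixels[idx] - km.cluster_centers_[k], axis=1)
        flat = idx[np.argmin(dist)]
        samples.append((flat // w, flat % w))  # (row, col) location in the image
    return samples

def detect_measuring_area(gray_img, min_r=10, max_r=60):
    """Detect the circular measuring spot of the spectroradiometer
    with the circle Hough transform."""
    blurred = cv2.medianBlur(gray_img, 5)  # suppress grid lines of the coordinate paper
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=2 * min_r, param1=100, param2=30,
                               minRadius=min_r, maxRadius=max_r)
    # returns an (N, 3) array of (x, y, radius), or None if no circle is found
    return None if circles is None else np.uint16(np.around(circles[0]))
```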
In this study, a novel illuminant color estimation framework is proposed for color constancy, combining the high representational capacity of deep-learning-based models with the strong interpretability of assumption-based models. A well-designed building block, the feature map reweight unit (ReWU), helps achieve competitive accuracy on benchmark datasets with respect to prior state-of-the-art models while requiring only 1%-5% of the model size and 8%-20% of the computational cost. In addition to local color estimation, a confidence estimation branch is included so that the model produces a point estimate and its uncertainty estimate simultaneously, which provides useful clues for aggregating local estimates and for multi-illuminant estimation. The source code and the dataset are available at https://github.com/QiuJueqin/Reweight-CC.

Keywords: color constancy, illuminant estimation, convolutional neural network, computer vision

Introduction

Color constancy in the human visual system compensates for the effect of the illumination on the perceived colors of objects and is an essential prerequisite for many vision tasks. Many computer vision applications are designed to extract comprehensive information from the intrinsic colors of objects and therefore require color-unbiased input images. Unfortunately, the photosensors in modern digital cameras cannot automatically compensate for illuminant colors. To address this issue, a variety of computational color constancy algorithms have been proposed to mimic the dynamic adjustment of the cones in the human visual system [1,2,3].

Computational color constancy generally works by first estimating the illuminant color and then compensating for it by multiplying the color-biased image by the reciprocal of that estimate. Existing computational color constancy algorithms can be classified into a priori assumption-based methods and learning-based methods, according to whether a training process is needed. Typical assumption-based algorithms include Gray-World [4], White-Patch [1], variants of Gray-Edge [5,6], and methods that exploit statistical information in the images [7]. Although assumption-based methods are lightweight and comprehensible, their performance is likely to degrade dramatically when these restrictive assumptions are not satisfied. Learning-based algorithms can be further grouped into low-level and high-level methods. Typical low-level methods include Color-by-Correlation [8], Gamut Mapping [9], and Bayesian color constancy [10]. Because spatial and textural information is lost when generating low-level color descriptors, these methods are prone to producing ambiguous estimates if they have not "seen" the colorimetric patterns of the test images during training. In recent years, following the massive success of deep learning in the computer vision community, high-level color constancy algorithms based on convolutional neural networks (CNNs) have achieved state-of-the-art performance on benchmark datasets ...
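The estimate-then-compensate pipeline described above can be made concrete with the Gray-World method [4] cited in the introduction. The sketch below is our own illustration under the Gray-World assumption (the average scene color is achromatic) and is not code from the paper's repository; it assumes a linear RGB image and uses a brightness-preserving von Kries (diagonal) correction.

```python
import numpy as np

def gray_world_correct(img):
    """Gray-World color constancy: the per-channel mean of the image
    estimates the illuminant color, which is then divided out.

    img: (H, W, 3) linear RGB array.
    """
    illuminant = img.reshape(-1, 3).mean(axis=0)   # (R, G, B) illuminant estimate
    gains = illuminant.mean() / illuminant         # reciprocal scaling, brightness-preserving
    return img * gains                             # von Kries-style diagonal correction

# Example: neutralize a synthetic reddish color cast
img = np.random.rand(64, 64, 3) * np.array([1.3, 1.0, 0.7])
balanced = gray_world_correct(img)
```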
We conducted psychophysical experiments to investigate the factors affecting the image quality of HDR displays. The results indicated that the OLED display has advantages over IPS and VA LCDs owing to its lower minimum luminance level and weaker pixel interaction, making it an appropriate choice for displaying HDR content.

Keywords: high dynamic range (HDR); display; image quality; psychophysical experiment