Abstract-There is an analogy between single-chip color cameras and the human visual system in that both systems acquire only one limited wavelength sensitivity band per spatial location. We exploit this analogy by defining a model that characterizes a one-color-per-spatial-position image as a coding into luminance and chrominance of the corresponding three-colors-per-spatial-position image. Luminance is defined at full spatial resolution, while chrominance contains sub-sampled opponent colors. Moreover, luminance and chrominance follow a particular arrangement in the Fourier domain, which allows demosaicing by spatial frequency filtering. The model shows that visual artifacts after demosaicing are due to aliasing between luminance and chrominance and can be reduced with a pre-processing filter. This approach also gives new insights into the representation of single-color-per-spatial-location images and enables formal, controllable procedures for designing demosaicing algorithms that perform well compared with competing approaches, as demonstrated by experiments.
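As an illustration of this frequency-domain view, the following is a minimal Python sketch of luminance/chrominance demosaicing. It assumes an RGGB Bayer pattern, uses a 5x5 binomial kernel in place of an optimized luminance filter, and interpolates the sub-sampled chrominance with a simple normalized bilinear-style scheme; it is a simplified sketch, not the paper's exact filter design.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_frequency_selection(cfa):
    """Luminance/chrominance demosaicing sketch for an RGGB Bayer mosaic.

    Assumptions (illustrative): input values normalized to [0, 1], a 5x5
    binomial kernel as the luminance low-pass filter, and a 3x3 normalized
    kernel for chrominance interpolation.
    """
    h, w = cfa.shape

    # 1. Estimate full-resolution luminance with a low-pass (baseband) filter.
    binom = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0
    lum = convolve(cfa, binom, mode='reflect')

    # 2. The residual carries the sub-sampled, modulated chrominance.
    chrom = cfa - lum

    # 3. Demultiplex chrominance onto the three CFA sampling lattices (RGGB).
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1   # R locations
    masks[0::2, 1::2, 1] = 1   # G locations
    masks[1::2, 0::2, 1] = 1   # G locations
    masks[1::2, 1::2, 2] = 1   # B locations

    # 4. Interpolate each sparse chrominance plane and add the luminance back.
    bilinear = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    rgb = np.empty((h, w, 3))
    for c in range(3):
        num = convolve(chrom * masks[..., c], bilinear, mode='reflect')
        den = convolve(masks[..., c], bilinear, mode='reflect')
        rgb[..., c] = lum + num / np.maximum(den, 1e-8)
    return np.clip(rgb, 0.0, 1.0)
```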
We present a tone mapping algorithm that is derived from a model of retinal processing. Our approach has two major improvements over existing methods. First, tone mapping is applied directly to the mosaic image captured by the sensor, analogous to the human visual system, which applies a nonlinearity to the chromatic responses captured by the cone mosaic. This reduces the number of necessary operations by a factor of 3. Second, we introduce a variation of the center/surround class of local tone mapping algorithms, which are known to increase the local contrast of images but tend to create artifacts. Our method yields a clear improvement in contrast while avoiding halos and maintaining a good global appearance. Like traditional center/surround algorithms, our method uses a weighted average of surrounding pixel values. Instead of being used directly, the weighted average serves as a variable in the Naka-Rushton equation, which models the photoreceptors' nonlinearity. Our algorithm provides pleasing results on various images with different scene content and dynamic range.
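The center/surround Naka-Rushton step can be sketched as follows. The single Gaussian surround, its width, and the direct use of the surround as the adaptation term are illustrative assumptions rather than the paper's exact parameters; the sketch operates on the raw mosaic as a plain 2-D array.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_naka_rushton_tonemap(mosaic, sigma_space=10.0, eps=1e-6):
    """Center/surround tone mapping sketch applied directly to the raw mosaic.

    Assumptions (illustrative): one Gaussian surround of width `sigma_space`
    and the Naka-Rushton form out = I / (I + A), where A is the local
    weighted average of surrounding mosaic values.
    """
    img = mosaic.astype(float)
    img = img / (img.max() + eps)          # normalize to [0, 1]

    # Surround: Gaussian-weighted average of neighboring mosaic values.
    surround = gaussian_filter(img, sigma=sigma_space, mode='reflect')

    # Naka-Rushton nonlinearity with the surround as the adaptation level.
    out = img / (img + surround + eps)

    # Rescale to [0, 1] for display or subsequent demosaicing.
    out = (out - out.min()) / (out.max() - out.min() + eps)
    return out
```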
We present a new algorithm that performs demosaicing and super-resolution jointly from a set of raw images sampled with a color filter array. Such a combined approach allows us to compute the alignment parameters between the images on the raw camera data, before interpolation artifacts are introduced. After image registration, a high-resolution color image is reconstructed at once from the full set of images. For this, we use normalized convolution, an interpolation method for nonuniformly sampled data. Our algorithm is tested and compared with other approaches in simulations and practical experiments.
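A zeroth-order normalized convolution step for interpolating registered nonuniform samples can be sketched as below. The Gaussian applicability function and its width are assumptions; in practice such a routine would be run once per color channel after the registered raw samples have been accumulated on the high-resolution grid according to their CFA color.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_convolution(values, certainty, sigma=1.0, eps=1e-8):
    """Zeroth-order normalized convolution sketch.

    `values` holds the registered raw samples accumulated on the
    high-resolution grid; `certainty` is 1 where a sample landed and 0
    elsewhere. A Gaussian applicability function of width `sigma` is an
    assumption; other applicability functions or higher orders are possible.
    """
    num = gaussian_filter(values * certainty, sigma=sigma, mode='constant')
    den = gaussian_filter(certainty, sigma=sigma, mode='constant')
    return num / np.maximum(den, eps)
```

Dividing the filtered signal by the filtered certainty weights each output pixel only by the samples that actually exist nearby, which is what makes the method suitable for irregularly placed samples.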
From moonlight to bright sunshine, real-world visual scenes span a very wide range of luminance; they are said to be High Dynamic Range (HDR). Our visual system is well adapted to exploring and analyzing such variable visual content. It is now possible to acquire HDR content with digital cameras; however, it cannot be rendered faithfully on standard displays, which have only Low Dynamic Range (LDR) capabilities, and this rendering usually produces poor exposure or loss of information. It is therefore necessary to develop locally adaptive Tone Mapping Operators (TMO) that compress HDR content to LDR while keeping as much information as possible. The human retina is known to perform such a task to overcome the limited range of values that neurons can code. The purpose of this paper is to present a TMO inspired by the properties of the retina. The presented biological model allows reliable dynamic range compression with natural color constancy properties. Moreover, its non-separable spatio-temporal filter enhances HDR video processing with added temporal constancy.
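A rough per-frame sketch of such retina-inspired video tone mapping is given below. It approximates the non-separable spatio-temporal filter with a separable spatial Gaussian plus a first-order temporal recursion, and compresses through a Naka-Rushton-type nonlinearity applied via a luminance ratio; these are simplifying assumptions, not the paper's model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

class RetinaToneMapper:
    """Minimal sketch of retina-like HDR video tone mapping.

    Assumptions (illustrative): spatial Gaussian plus first-order temporal
    recursion as a stand-in for the non-separable spatio-temporal filter,
    and Naka-Rushton compression of luminance applied to all channels
    through a common ratio to help preserve color.
    """

    def __init__(self, sigma_space=5.0, tau=0.8, eps=1e-6):
        self.sigma_space = sigma_space
        self.tau = tau          # temporal smoothing factor in [0, 1)
        self.eps = eps
        self.adapt = None       # running spatio-temporal adaptation level

    def process(self, hdr_frame):
        """Tone map one HDR frame of shape (H, W, 3) with linear values."""
        # Luminance as the mean of the color channels (simple assumption).
        lum = hdr_frame.mean(axis=-1)

        # Spatial part of the adaptation signal.
        local = gaussian_filter(lum, sigma=self.sigma_space, mode='reflect')

        # Temporal recursion provides temporal constancy across frames.
        if self.adapt is None:
            self.adapt = local
        else:
            self.adapt = self.tau * self.adapt + (1.0 - self.tau) * local

        # Naka-Rushton compression of luminance, propagated to the color
        # channels through a common ratio.
        compressed = lum / (lum + self.adapt + self.eps)
        ratio = compressed / (lum + self.eps)
        ldr = hdr_frame * ratio[..., None]
        return np.clip(ldr / (ldr.max() + self.eps), 0.0, 1.0)
```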