Color transformation is among the most effective methods for improving the mood of an image, because color has a large influence on the mood an image conveys. However, conventional color transformation tools trade off the quality of the result against the amount of manual operation required. To achieve a more detailed and natural result with less labor, we previously proposed a method that performs example-based color stylization of images using perceptual color categories. In this paper, we extend that method to make the algorithm more robust and to stylize the colors of video frame sequences. We present a variety of results, showing that the resulting images and videos convey a different, yet coherent, mood.
We describe a new computational approach to stylizing the colors of an image using a reference image. During processing, we take the characteristics of human color perception into account to generate more appealing results. Our system starts by classifying each pixel value into one of the basic color categories derived from our psychophysical experiments. The basic color categories are perceptual categories that are universal to everyone, regardless of nationality or cultural background. These categories impose restrictions on the color transformation to avoid generating unnatural results. Our system then renders a new image by transferring colors from the reference image to the input image based on these categorizations. To avoid artifacts caused by the explicit clustering, our system switches to a fuzzy categorization when pseudocontours appear in the resulting image. We present a variety of results and show that our method performs a large yet natural color transformation without any sense of incongruity, and that the resulting images automatically capture the characteristics of the colors used in the reference image.
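The category-constrained transfer described above can be sketched in miniature. The sketch below is illustrative only: the category centroids, the nearest-centroid classifier, and the per-category mean/std transfer are simplifying assumptions standing in for the paper's psychophysically derived categories and its actual transfer and fuzzy-categorization steps.

```python
import numpy as np

# Hypothetical RGB centroids (values in [0, 1]) standing in for the
# basic color categories derived from psychophysical experiments.
CATEGORY_CENTROIDS = np.array([
    [0.90, 0.10, 0.10],  # red
    [0.10, 0.80, 0.20],  # green
    [0.10, 0.20, 0.90],  # blue
    [0.95, 0.95, 0.20],  # yellow
    [0.50, 0.50, 0.50],  # gray
])

def categorize(pixels):
    """Assign each (N, 3) pixel to its nearest category centroid."""
    dists = np.linalg.norm(pixels[:, None, :] - CATEGORY_CENTROIDS[None], axis=2)
    return dists.argmin(axis=1)

def transfer_colors(input_px, ref_px):
    """Shift each input category toward the matching reference category.

    Colors are transferred only within a shared category, which is the
    restriction that keeps the transformation from producing unnatural
    cross-category swaps (e.g. turning skin tones blue).
    """
    out = input_px.astype(float).copy()
    in_cat, ref_cat = categorize(input_px), categorize(ref_px)
    for c in range(len(CATEGORY_CENTROIDS)):
        src, dst = in_cat == c, ref_cat == c
        if not src.any() or not dst.any():
            continue  # category absent in one image; leave those pixels alone
        mu_in, sd_in = input_px[src].mean(0), input_px[src].std(0) + 1e-6
        mu_ref, sd_ref = ref_px[dst].mean(0), ref_px[dst].std(0) + 1e-6
        # Per-channel mean/std matching within the category.
        out[src] = (input_px[src] - mu_in) / sd_in * sd_ref + mu_ref
    return np.clip(out, 0.0, 1.0)
```

A production version would operate in a perceptual color space and blend category memberships softly near boundaries, which is the role the fuzzy categorization plays when pseudocontours appear.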
We report on the IR sensitivity enhancement of a back-illuminated CMOS image sensor (BI-CIS) with a two-dimensional diffractive inverted pyramid array (IPA) structure on crystalline silicon (c-Si) and deep trench isolation (DTI). FDTD simulations of semi-infinitely thick c-Si with 2D IPAs of pitches above 400 nm on its surface show more than 30% improvement in light absorption at λ = 850 nm, with a maximum enhancement of 43% at a pitch of 540 nm. A prototype BI-CIS sample with a pixel size of 1.2 μm square containing 400 nm pitch IPAs shows an 80% sensitivity enhancement at λ = 850 nm compared to a reference sample with a flat surface; this is attributed to diffraction by the IPA and total reflection at the pixel boundary. NIR images taken by a demo camera equipped with a C-mount lens show a 75% sensitivity enhancement over the λ = 700–1200 nm wavelength range with negligible spatial resolution degradation. Light-trapping CIS pixel technology promises to improve NIR sensitivity and appears applicable to many image sensor applications, including security cameras, personal authentication, and range-finding Time-of-Flight cameras with IR illumination.