It is commonly believed that having more white pixels in a color filter array (CFA) helps demosaicing performance for images collected in low lighting conditions. However, to the best of our knowledge, a systematic study demonstrating this claim does not exist. We present a comparative study to systematically and thoroughly evaluate demosaicing performance on low lighting images using two CFAs: the standard Bayer pattern (aka CFA 1.0) and the Kodak CFA 2.0 (an RGBW pattern with 50% white pixels). Using the clean Kodak dataset containing 12 images, we first emulated low lighting images by injecting Poisson noise at two signal-to-noise ratio (SNR) levels: 10 dB and 20 dB. We then created CFA 1.0 and CFA 2.0 images from the noisy images. After that, we applied more than 15 conventional and deep learning based demosaicing algorithms to demosaic the CFA patterns. Using both objective evaluations with five performance metrics and subjective visualization, we observe that having more white pixels indeed helps demosaicing performance in low lighting conditions. This thorough comparative study is our first contribution. With denoising, we observed that the demosaicing performance of both CFAs improved by several dB; this can be considered our second contribution. Moreover, we noticed that denoising before demosaicing is more effective than denoising after demosaicing. Answering the question of where denoising should be applied is our third contribution. We also noticed that denoising plays a slightly more important role at 10 dB SNR than at 20 dB SNR. Some discussions on the following phenomena are also included: (1) why CFA 2.0 performed better than CFA 1.0; (2) why denoising was more effective before demosaicing than after demosaicing; and (3) why denoising helped more at low SNRs than at high SNRs.

The baseline approach refers to a simple upsampling of the reduced resolution color image shown in Figure 2.
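The low-light emulation step described in the abstract above (injecting Poisson noise at a target SNR) can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: it assumes the shot-noise model, in which photon counts `c = a * img` have variance equal to their mean, so the scale factor `a` that hits a given SNR can be solved in closed form.

```python
import numpy as np

def add_poisson_noise(img, snr_db, rng=None):
    """Emulate low-light capture via Poisson (shot) noise at a target SNR.

    img: float array in [0, 1]. Returns a noisy image in [0, 1].
    For counts c = a*img, the noise variance equals the mean count, so
    SNR_lin = mean((a*img)^2) / mean(a*img) = a * mean(img^2) / mean(img),
    which is linear in the photon-scaling factor a.
    """
    rng = np.random.default_rng() if rng is None else rng
    img = np.clip(np.asarray(img, dtype=np.float64), 0.0, 1.0)
    snr_lin = 10.0 ** (snr_db / 10.0)
    # Solve a from: a * mean(img^2) / mean(img) = snr_lin
    a = snr_lin * img.mean() / np.mean(img ** 2)
    noisy = rng.poisson(a * img) / a
    return np.clip(noisy, 0.0, 1.0)
```

The empirical SNR of the output, `10*log10(mean(img**2) / mean((noisy - img)**2))`, lands close to the requested value for images with enough pixels.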
The standard approach for CFA 2.0 (Electronics 2019, 8, 1444) is shown in Figure 2, which illustrates how to combine the interpolated luminance image with the reduced resolution color image to generate a full resolution color image.

Figure 2. Standard approach to demosaicing CFA 2.0 images. Image from [38].

In our earlier paper [16], we proposed a pansharpening approach to demosaicing CFA 2.0, illustrated in Figure 3. The missing pixels in the panchromatic band are interpolated; at the same time, the reduced resolution CFA is demosaiced. We then apply pansharpening to generate the full resolution color image. Many pansharpening algorithms can be used: Principal Component Analysis (PCA) [39], Smoothing Filter-based Intensity Modulation (SFIM) [40], Modulation Transfer Function Generalized Laplacian Pyramid (MTF-GLP) [41], MTF-GLP with High Pass Modulation (HPM) [42], Gram Schmidt (GS) [43], GS Adaptive (G...
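To make the fusion step concrete, here is a minimal SFIM-style sketch of the pansharpening idea described above: upsample the reduced-resolution color image, then modulate each band by the ratio of the full-resolution panchromatic band to its lowpass version. This is an illustrative simplification (nearest-neighbour upsampling, block-average lowpass), not the exact pipeline of [16] or [40].

```python
import numpy as np

def sfim_pansharpen(color_lr, pan, eps=1e-6):
    """SFIM-style fusion sketch: modulate an upsampled low-res color
    image by pan / lowpass(pan) to inject full-resolution detail.

    color_lr: (h, w, 3) reduced-resolution color image in [0, 1].
    pan:      (s*h, s*w) full-resolution panchromatic band in [0, 1].
    """
    s = pan.shape[0] // color_lr.shape[0]  # integer scale factor
    # Nearest-neighbour upsampling of the color image to pan resolution
    up = np.repeat(np.repeat(color_lr, s, axis=0), s, axis=1)
    # Block-average lowpass of the pan band, matched to the color grid
    h, w = pan.shape
    lp = pan.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
    lp = np.repeat(np.repeat(lp, s, axis=0), s, axis=1)
    # Intensity modulation: detail ratio applied to every color band
    return np.clip(up * (pan / np.maximum(lp, eps))[..., None], 0.0, 1.0)
```

When the pan band is smooth (ratio near 1) the output reduces to the plain upsampled color image; high-frequency pan detail sharpens all three bands proportionally.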
One key advantage of compressive sensing is that only a small amount of the raw video data is transmitted or saved. This is extremely important in bandwidth-constrained applications. Moreover, in some scenarios, the local processing device may not have enough processing power to handle object detection and classification, so the heavy-duty processing tasks need to be done at a remote location. Conventional compressive sensing schemes require the compressed data to be reconstructed before any subsequent processing can begin. This is not only time consuming but may also lose important information in the process. In this paper, we present a real-time framework for processing compressive measurements directly, without any image reconstruction. A special type of compressive measurement known as pixel-wise coded exposure (PCE) is adopted in our framework. PCE condenses multiple frames into a single frame, and individual pixels can have different exposure times to allow high dynamic range. A deep learning tool known as You Only Look Once (YOLO) is used in our real-time system for object detection and classification. Extensive experiments showed that the proposed real-time framework is feasible and can achieve decent detection and classification performance.
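The PCE measurement described above can be sketched as follows. This is a hypothetical simulation, not the paper's implementation: each pixel integrates the scene only over its own exposure window within a group of T frames, so the T frames condense into one coded frame, and per-pixel exposure durations trade motion blur against dynamic range.

```python
import numpy as np

def pce_measure(frames, starts, durations):
    """Pixel-wise coded exposure: condense T frames into one coded frame.

    frames:    (t, h, w) video clip.
    starts:    (h, w) integer frame index where each pixel's exposure opens.
    durations: (h, w) number of frames each pixel stays exposed.
    Each pixel integrates only over [start, start + duration), then is
    normalized by its own exposure length.
    """
    t, h, w = frames.shape
    idx = np.arange(t)[:, None, None]                 # frame index, broadcast per pixel
    mask = (idx >= starts) & (idx < starts + durations)
    return (frames * mask).sum(axis=0) / np.maximum(durations, 1)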
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.
customersupport@researchsolutions.com
10624 S. Eastern Ave., Ste. A-614
Henderson, NV 89052, USA
This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.
Copyright © 2024 scite LLC. All rights reserved.
Made with 💙 for researchers
Part of the Research Solutions Family.