In multiview video, a number of cameras capture the same scene from different viewpoints. There can be significant variations in the color of views captured with different cameras, which negatively affects compression performance when inter-view prediction is used. In this letter, a method is proposed for correcting the color of multiview video sets as a preprocessing step to compression. Unlike previous work, where one of the captured views is used as the color reference, we correct all views to match the average color of the set of views. Block-based disparity estimation is used to find matching points between all views in the video set, and the average color is calculated for these matching points. A least-squares regression is performed for each view to find a function that makes the view most closely match the average color. Experimental results show that when multiview video is compressed with the Joint Multiview Video Model, the proposed method increases compression efficiency by up to 1.0 dB in luma peak signal-to-noise ratio (PSNR) compared to compressing the original uncorrected video.

Index Terms—Color correction, disparity estimation, multiview video coding (MVC), video processing.
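To make the correction step concrete, the following is a minimal sketch of a per-view least-squares fit toward the cross-view average, assuming matched samples have already been found by disparity estimation. The function names (fit_color_correction, apply_correction) and the simple per-channel gain/offset model are illustrative assumptions, not details taken from the paper; the paper's regression function may be more elaborate.

```python
import numpy as np

def fit_color_correction(view_samples, average_samples):
    """Fit a per-channel linear model so view_samples ~ average_samples.

    view_samples, average_samples: (N, 3) arrays of RGB values at the
    N matched points (the view's own colors and the cross-view averages).
    Returns a list of per-channel (gain, offset) pairs found by least squares.
    """
    params = []
    for c in range(3):
        # Design matrix [x, 1] for the linear model: avg ~ gain * x + offset
        A = np.stack([view_samples[:, c], np.ones(len(view_samples))], axis=1)
        (gain, offset), *_ = np.linalg.lstsq(A, average_samples[:, c], rcond=None)
        params.append((gain, offset))
    return params

def apply_correction(view, params):
    """Apply the fitted per-channel linear correction to a full (H, W, 3) view."""
    corrected = np.empty_like(view, dtype=np.float64)
    for c, (gain, offset) in enumerate(params):
        corrected[..., c] = gain * view[..., c] + offset
    return np.clip(corrected, 0, 255)
```

Repeating the fit independently for each view pulls every view toward the shared average rather than toward a single reference camera, which is the key difference from reference-based correction.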
Most consumer digital cameras use a single light sensor which captures color information using a color filter array (CFA). This produces a mosaic image, where each pixel location contains a sample of only one of three colors: red, green, or blue. The two missing colors at each pixel location must be interpolated from the surrounding samples in a process called demosaicking. The conventional approach to compressing video captured with these devices is to first perform demosaicking and then compress the resulting full-color video using standard methods. In this paper, two methods for compressing CFA video prior to demosaicking are proposed. In our first method, the CFA video is directly compressed with the H.264 video coding standard in 4:2:2 sampling mode. Our second method uses a modified version of H.264, in which motion compensation is altered to take advantage of the properties of CFA data. Simulations show that both proposed methods give better compression efficiency than the demosaick-first approach at high bit rates, and thus are suitable for applications, such as digital camcorders, where high-quality video is required.

Index Terms—Bayer pattern, color demosaicking, H.264/AVC, single-sensor digital video cameras, video coding.

I. INTRODUCTION

Most commercial digital cameras use a single light sensor which is monochromatic in nature. In order to capture RGB color information, a color filter array (CFA) is used, which produces a mosaic image where each pixel location contains either a red, green, or blue sample. The Bayer pattern CFA [1] is commonly used, which captures pixels in groups of four, each group containing two green, one red, and one blue sample (Fig. 1). More green samples are captured than red or blue because the human visual system is more sensitive to the green portion of the light spectrum. Other CFA patterns are possible [2], but the Bayer pattern is considered here due to its commercial importance. In order to form a full-color image or video from CFA data, the color data is interpolated in a process called demosaicking.
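As an illustration of how Bayer data could be fed to a 4:2:2 encoder without demosaicking, the sketch below separates an RGGB mosaic into one luma-sized plane and two half-width chroma planes. The exact sample-to-plane mapping used in the paper is not reproduced here; this particular arrangement (greens into luma, reds into Cb, blues into Cr, for a frame of height H/2 and width W) is an assumption, as is the function name bayer_to_422.

```python
import numpy as np

def bayer_to_422(cfa):
    """Rearrange an (H, W) RGGB mosaic into 4:2:2-proportioned planes.

    Returns a (H/2, W) "luma" plane holding the green samples and two
    (H/2, W/2) "chroma" planes holding the red and blue samples, so the
    sample counts match 4:2:2 sampling for an H/2 x W frame.
    """
    H, W = cfa.shape
    # RGGB layout: R at (even, even), G at (even, odd) and (odd, even),
    # B at (odd, odd).
    g1 = cfa[0::2, 1::2]   # greens on red rows,  shape (H/2, W/2)
    g2 = cfa[1::2, 0::2]   # greens on blue rows, shape (H/2, W/2)
    y = np.empty((H // 2, W), dtype=cfa.dtype)
    y[:, 0::2] = g2        # interleave so each 2x2 block's greens stay adjacent
    y[:, 1::2] = g1
    cb = cfa[0::2, 0::2]   # red samples
    cr = cfa[1::2, 1::2]   # blue samples
    return y, cb, cr
```

Because no interpolation has been performed, no synthetic samples are compressed; demosaicking is deferred until after decoding, which is the motivation for the compress-first pipeline described above.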
When images are stitched together to form a panorama, there is often color mismatch between the source images due to vignetting and differences in exposure and white balance. In this paper, a low-complexity method is proposed to correct vignetting and color differences between images, producing panoramas that look consistent across all source images. Unlike most previous methods, which require complex nonlinear optimization to solve for correction parameters, our method requires only linear regressions with a small number of parameters, resulting in a fast, computationally efficient method. Experimental results show the proposed method effectively removes vignetting effects and produces images that are highly visually consistent in color and brightness.
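The following is a minimal sketch of the kind of linear fit such a method relies on, assuming corresponding pixel pairs from the overlap region of two source images are already available. The names (fit_gain, apply_gain, overlap_a, overlap_b) are illustrative, and the single-gain-per-channel model is a simplification; the paper's actual correction also models vignetting, which varies with distance from the image center.

```python
import numpy as np

def fit_gain(overlap_a, overlap_b):
    """Per-channel least-squares gain g minimizing ||g * overlap_b - overlap_a||^2.

    overlap_a, overlap_b: (N, 3) arrays of matched RGB values from the
    overlap region of two images. Returns a (3,) array of gains.
    """
    # Closed-form solution of the one-parameter linear regression per channel.
    num = np.sum(overlap_a * overlap_b, axis=0)
    den = np.sum(overlap_b * overlap_b, axis=0)
    return num / den

def apply_gain(image, gains):
    """Scale each channel of an (H, W, 3) image by its fitted gain."""
    return np.clip(image * gains, 0, 255)
```

Because each fit is a closed-form linear regression rather than an iterative nonlinear optimization, the cost per image pair is a single pass over the overlap pixels, which is what makes the approach fast.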