Fig. 1. We present a multi-frame super-resolution algorithm that supplants the need for demosaicing in a camera pipeline by merging a burst of raw images. We show a comparison to a method that first merges frames containing the same color channels and then demosaics the result (top). By contrast, our method (bottom) creates the full RGB image directly from a burst of raw images. This burst was captured with a hand-held mobile phone and processed on device. Note in the third (red) inset that the demosaiced result exhibits aliasing (Moiré), while our result takes advantage of this aliasing, which changes on every frame in the burst, to produce a merged result in which the aliasing is gone but the cloth texture becomes visible.

Abstract. Compared to DSLR cameras, smartphone cameras have smaller sensors, which limit their spatial resolution; smaller apertures, which limit their light-gathering ability; and smaller pixels, which reduce their signal-to-noise ratio. The use of color filter arrays (CFAs) requires demosaicing, which further degrades resolution. In this paper, we supplant the use of traditional demosaicing in single-frame and burst photography pipelines with a multi-frame super-resolution algorithm that creates a complete RGB image directly from a burst of CFA raw images. We harness natural hand tremor, typical in handheld photography, to acquire a burst of raw frames with small offsets. These frames are then aligned and merged to form a single image with red, green, and blue values at every pixel site. This approach, which includes no explicit demosaicing step, serves both to increase image resolution and to boost signal-to-noise ratio. Our algorithm is robust to challenging scene conditions: local motion, occlusion, and scene changes. It runs at 100 milliseconds per 12-megapixel RAW input burst frame on mass-produced mobile phones.
Specifically, the algorithm is the basis of the Super-Res Zoom feature, as well as the default merge method in Night Sight mode (whether zooming or not) on Google's flagship phone.
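The align-and-merge idea in the abstract can be illustrated with a deliberately simplified sketch: frames are registered to a reference by phase correlation and then averaged. This is not the paper's method — the real pipeline operates on raw CFA data with sub-pixel alignment, kernel-regression accumulation, and robustness weighting; the whole-pixel shifts, plain averaging, and function names below are illustrative assumptions.

```python
import numpy as np

def phase_correlation_shift(ref, frame):
    """Return the integer (dy, dx) roll that re-aligns `frame` onto `ref`,
    estimated by phase correlation (a stand-in for the alignment stage)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(frame)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Map the circular peak position into a signed shift.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def naive_merge(frames):
    """Align each frame to the first and average (a crude proxy for the
    robust, noise-reducing merge described in the abstract)."""
    ref = frames[0]
    acc = ref.astype(np.float64)
    for frame in frames[1:]:
        dy, dx = phase_correlation_shift(ref, frame)
        acc += np.roll(frame, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

Averaging aligned frames boosts signal-to-noise; recovering resolution beyond this requires the sub-pixel accumulation onto a finer grid that the paper describes.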
Abstract. We examine some fundamental theoretical limits on the ability of practical digital holography (DH) systems to resolve detail in an image. Unlike conventional diffraction-limited imaging systems, where a projected image of the limiting aperture is used to define the system performance, there are at least three major effects that determine the performance of a DH system: (i) the spacing between adjacent pixels on the CCD, (ii) an averaging effect introduced by the finite size of these pixels, and (iii) the finite extent of the camera face itself. Using a theoretical model, we define a single expression that accounts for all these physical effects. With this model, we explore several different DH recording techniques: off-axis and inline, considering both the DC terms and the real and twin images that are features of the holographic recording process. Our analysis shows that the imaging operation is shift-variant, and we demonstrate this using a simple example. We examine how our theoretical model can be used to optimize CCD design for lensless DH capture. We present a series of experimental results to confirm the validity of our theoretical model, demonstrating recovery of super-Nyquist frequencies for the first time. © 2009 Society of Photo-Optical Instrumentation Engineers.
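The three CCD effects named in this abstract can be sketched numerically in one dimension. The parameter values below (pixel pitch, fill factor, pixel count) are assumed for illustration and are not taken from the paper; the sinc envelope, aliasing rule, and frequency-resolution limit are the standard textbook forms of effects (ii), (i), and (iii) respectively, not the paper's combined expression.

```python
import numpy as np

# Assumed sensor parameters (illustrative only).
pitch = 9e-6           # (i) pixel spacing, metres
width = 0.8 * pitch    # (ii) active pixel width (fill factor 0.8)
n_pix = 1024           # (iii) pixels across the sensor face

f = np.linspace(0, 2 / pitch, 2000)   # spatial frequency axis, cycles/m
f_nyquist = 1 / (2 * pitch)

# (ii) Finite pixel size: averaging over `width` multiplies the recorded
# spectrum by a sinc envelope (np.sinc(x) = sin(pi*x)/(pi*x)).
pixel_mtf = np.abs(np.sinc(f * width))

# (i) Pixel spacing: sampling replicates the spectrum at multiples of
# 1/pitch, so a frequency above f_nyquist folds back into the baseband.
def aliased_frequency(f_in, pitch):
    fs = 1.0 / pitch
    f_mod = f_in % fs
    return min(f_mod, fs - f_mod)

# (iii) Finite sensor extent: windowing the record over n_pix * pitch
# limits the achievable frequency resolution.
delta_f = 1.0 / (n_pix * pitch)
```

The fold-back in `aliased_frequency` is exactly what makes super-Nyquist recovery interesting: the aliased components are not destroyed, only relocated, which a suitable model can exploit.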
Abstract. We present a novel user-independent framework for representing and recognizing hand postures used in sign language. We propose a novel hand posture feature, an eigenspace Size Function, which is robust for classifying hand postures independent of the person performing them. An analysis of the discriminatory properties of our proposed eigenspace Size Function shows a significant improvement in performance when compared to the original, unmodified Size Function. We describe our support vector machine based recognition framework, which uses a combination of our eigenspace Size Function and Hu moment features to classify different hand postures. Experiments, based on two different hand posture data sets, show that our method is robust at recognizing hand postures independent of the person performing them. Our method also performs well compared to other user-independent hand posture recognition systems.
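As a rough illustration of the feature combination this abstract describes, the sketch below computes the first two Hu invariants from normalized central moments and concatenates them with a placeholder descriptor standing in for the eigenspace Size Function (which is substantially more involved). The function names and the truncation to two invariants are assumptions made for brevity, not the paper's implementation.

```python
import numpy as np

def central_moment(img, p, q):
    """Central image moment mu_pq of a grayscale/binary hand mask."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx = (xs * img).sum() / m00
    cy = (ys * img).sum() / m00
    return ((xs - cx) ** p * (ys - cy) ** q * img).sum()

def hu_first_two(img):
    """First two Hu invariants from normalized central moments.
    (The full framework would use the complete set of seven.)"""
    mu00 = central_moment(img, 0, 0)
    eta = lambda p, q: central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

def combined_features(img, shape_descriptor):
    """Concatenate a shape descriptor (stand-in for the eigenspace Size
    Function) with Hu-moment features, mirroring the combined input the
    SVM classifier would receive."""
    return np.concatenate([shape_descriptor, hu_first_two(img)])
```

Because Hu invariants are built from central moments, they are unchanged under translation of the hand within the frame, which is one reason moment features pair well with posture descriptors.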