This paper presents a new technique to simultaneously estimate the depth map and the all-in-focus image of a scene, both at super-resolution, from a plenoptic camera. A plenoptic camera uses a microlens array to measure the radiance and direction of all the light rays in a scene. It is composed of n×n microlenses, each of which generates an m×m image. Previous approaches to the depth and all-in-focus estimation problem processed the plenoptic image, generated an n×n×m focal stack, and obtained an n×n depth map and all-in-focus image of the scene. This is a major drawback of the plenoptic camera approach to 3DTV, since the total resolution of the camera, n²m², is divided by m² to obtain a final resolution of only n² pixels. In our approach we propose a new super-resolution focal stack that is combined with multiview depth estimation. This technique allows a theoretical resolution of approximately n²m²/4 pixels, an O(m²) improvement over previous approaches. From a practical point of view, in typical scenes we are able to increase the resolution of previous techniques by a factor of 25. The time complexity of the algorithm makes real-time processing for 3DTV possible on appropriate hardware (GPUs or FPGAs), so it could be used in plenoptic video-cameras.
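The resolution arithmetic above can be sketched numerically. This is an illustrative calculation only; the values of n and m below are hypothetical examples chosen so that the gain factor m²/4 matches the 25× figure quoted in the abstract, not parameters taken from the paper.

```python
# Hypothetical sensor layout: n x n microlenses, each producing an m x m image.
n, m = 250, 10

total_pixels = (n * m) ** 2      # raw sensor resolution: n^2 * m^2
previous = n ** 2                # classic focal-stack output: one pixel per microlens
proposed = (n * m) ** 2 // 4     # super-resolution focal stack: ~ n^2 * m^2 / 4

gain = proposed // previous      # improvement factor = m^2 / 4
print(total_pixels, previous, proposed, gain)  # 6250000 62500 1562500 25
```

With m = 10 the proposed method recovers m²/4 = 25 times more pixels than the one-pixel-per-microlens approach, consistent with the practical gain stated in the abstract.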
Depth range cameras are a promising solution for the 3DTV production chain. The generation of color images with their accompanying depth value simplifies the transmission bandwidth problem in 3DTV and yields a direct input for autostereoscopic displays. Recent developments in plenoptic video-cameras make it possible to introduce 3D cameras that operate similarly to traditional cameras. The use of plenoptic cameras for 3DTV has some benefits with respect to 3D capture systems based on dual stereo cameras since there is no need for geometric and color calibration or frame synchronization. This paper presents a method for simultaneously recovering depth and all-in-focus images from a plenoptic camera in near real time using graphics processing units (GPUs). Previous methods for 3D reconstruction using plenoptic images suffered from the drawback of low spatial resolution. A method that overcomes this deficiency is developed on parallel hardware to obtain near real-time 3D reconstruction with a final spatial resolution of 800×600 pixels. This resolution is suitable as an input to some autostereoscopic displays currently on the market and shows that real-time 3DTV based on plenoptic video-cameras is technologically feasible.
In this paper we describe a fast, specialized hardware implementation of the belief propagation (BP) algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture that implements the algorithm without external memory. Although the BRAM usage of the device increases considerably, we can meet real-time constraints through extremely high-performance signal processing based on parallelism and simultaneous access to several memories. Results with 16-bit quantization show that performance is very close to that of the original Matlab implementation of the algorithm.
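To make the computation concrete, here is a minimal min-sum belief propagation message update of the kind an FPGA pipeline for depth labeling would parallelize. This is a generic sketch of BP with a linear (truncatable) smoothness model, not the paper's VHDL design; the cost values and the smoothness weight `lam` are illustrative assumptions.

```python
import numpy as np

def send_message(data_cost, incoming, lam=1.0):
    """Compute the min-sum BP message a pixel sends to one neighbor.

    data_cost : (L,) per-depth-label matching cost at the sending pixel
    incoming  : list of (L,) messages from the sender's *other* neighbors
    lam       : linear smoothness penalty per label step (assumed model)
    """
    h = data_cost + sum(incoming)          # aggregate local evidence
    L = len(h)
    msg = np.empty(L)
    for d in range(L):                     # minimize over the sender's labels
        msg[d] = min(h[dp] + lam * abs(d - dp) for dp in range(L))
    return msg - msg.min()                 # normalize to keep values bounded

# Toy example: 4 depth labels, one (uninformative) incoming message.
m = send_message(np.array([3.0, 0.0, 1.0, 2.0]), [np.zeros(4)], lam=1.0)
print(m)  # [1. 0. 1. 2.]
```

Each such update is independent across pixels and labels, which is what makes the algorithm amenable to the parallel, pipelined hardware architecture the abstract describes; fixed-point (e.g. 16-bit) quantization of `h` and `msg` is what the hardware version trades against the floating-point Matlab reference.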
The CAFADIS camera is a new wavefront sensor (WFS) patented by the Universidad de La Laguna. CAFADIS is a system based on the concept of the plenoptic camera originally proposed by Adelson and Wang [1], and its most salient feature is its ability to simultaneously measure wavefront maps and distances to objects [2]. This makes CAFADIS an interesting alternative for LGS-based AO systems, as it can measure from an LGS beacon both the atmospheric turbulence wavefront and the distance to the beacon, thus removing the need for an NGS defocus sensor to probe changes in the distance to the LGS beacon due to drifts of the mesospheric Na layer. In principle, the concept can also be employed to recover 3D profiles of the Na layer, allowing optimization of the measurement of the distance to the LGS beacon. We are currently investigating the possibility of extending the plenoptic WFS into a tomographic wavefront sensor. Simulations of a plenoptic WFS operated within an LGS-based AO system are shown for the recovery of wavefront maps at different heights. The preliminary results presented here demonstrate the tomographic ability of CAFADIS.