A depth-based computational photography model is proposed for all-in-focus image capture. A decomposition function, a defocus matrix, and a depth matrix are introduced to construct the photography model. The original image acquired from a camera can be decomposed into several sub-images on the basis of depth information. The defocus matrix is deduced from the depth matrix according to the sensor defocus geometry of a thin lens model, and the depth matrix is reconstructed using an axial binocular stereo vision algorithm. The photography model adopts an energy functional minimization method to extract the sharpest image pieces separately. The implementation of the photography method is described in detail, and experimental results for an actual scene demonstrate that the model is effective.
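As a rough illustration of how a defocus matrix might be deduced from a depth matrix under the thin-lens sensor defocus geometry the abstract refers to, the sketch below computes a per-pixel blur-circle diameter. This is a minimal NumPy formulation of the standard thin-lens relations, not the paper's implementation; the parameter names (f, aperture, sensor_dist) and the assumption that all scene depths exceed the focal length are ours.

```python
import numpy as np

def defocus_matrix(depth, f, aperture, sensor_dist):
    """Per-pixel blur-circle diameter for a thin lens (illustrative sketch).

    depth       : (H, W) array of scene depths u; assumed > f (same units as f).
    f           : focal length of the lens.
    aperture    : aperture diameter A.
    sensor_dist : lens-to-sensor distance s.
    """
    # Thin-lens equation: 1/f = 1/u + 1/v  =>  in-focus image distance v
    v = f * depth / (depth - f)
    # Geometric blur-circle diameter on the sensor: c = A * |s - v| / v
    return aperture * np.abs(sensor_dist - v) / v

# Hypothetical usage: depths in mm for a 50 mm lens at f/2 (A = 25 mm)
depth = np.array([[1000.0, 2000.0],
                  [1500.0, 3000.0]])
c = defocus_matrix(depth, f=50.0, aperture=25.0, sensor_dist=52.6)
```

Under such a model, the pixels with the smallest blur-circle diameter in each depth-indexed sub-image would be the sharpest candidates for composing the all-in-focus result.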
Introduction

The development of photography has involved four main stages: pinhole photography, lens photography, digital photography, and computational photography [1]. Computational photography contributes to recording a richer visual experience, capturing information beyond a simple set of pixels and making the recorded scene representation far more machine readable [2]. Among the most important goals of computational photography are extending the depth of field and the field of view. Representative research interests in computational photography include the design of special optics, the improvement of digital sensors, the application of modern processors, the development of light-field photography, and the implementation of three-dimensional (3-D) imaging [3][4][5][6][7]. In the case of 3-D imaging, a multi-view imaging system with axially distributed stereo image sensing and ray back-projection has been proposed for the visualization of partially occluded objects [8]. A unifying framework has been presented to evaluate the lateral and axial resolution of N-ocular imaging systems under fixed resource constraints [9]. Apart from these studies, the reconstruction of defocus and depth maps, which can be achieved using the defocus-from-depth method [10], is an important issue in 3-D photography. This approach also describes the relationships among image defocus, sensor defocus, and scene defocus in a camera system.

Here, we present a depth-based computational photography model, which is essentially a 3-D photography method, for extending the depth of field and reconstructing