Traditional depth from focus (DFF) methods obtain a depth image from a set of differently focused color images. They detect the in-focus region in each image by measuring the sharpness of the observed color textures. However, estimating the sharpness of an arbitrary color texture is not trivial, especially when an image has limited color or intensity variation. Recent deep-learning-based DFF approaches have shown that collective estimation of sharpness across a set of focus images, learned from a large body of training samples, outperforms traditional DFF on challenging target objects with textureless or glaring surfaces. In this article, we propose a deep spatial–focal convolutional neural network that encodes the correlations between consecutive focused images fed to the network in order. In this way, our neural network learns the pattern of blur changes at each image pixel from a volumetric input in the spatial–focal three-dimensional space. Extensive quantitative and qualitative evaluations on three existing public data sets show that our proposed method outperforms prior methods in depth estimation.
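The traditional pipeline the abstract contrasts against can be sketched briefly: measure per-pixel sharpness in every frame of the focal stack and take, for each pixel, the frame index where sharpness peaks. The sketch below is a minimal illustration, not the authors' method; the choice of a locally averaged squared Laplacian as the focus measure and the window size are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def dff_depth(stack):
    """Classic depth from focus: for each pixel, pick the frame index whose
    local sharpness (squared Laplacian, averaged over a 9x9 window) is highest.

    stack: (N, H, W) array of grayscale frames focused at N depths.
    Returns an (H, W) integer map of best-focus frame indices.
    """
    focus = np.stack([
        uniform_filter(laplace(frame.astype(float)) ** 2, size=9)
        for frame in stack
    ])
    return np.argmax(focus, axis=0)

# Toy focal stack: frame 2 is the sharp checkerboard, the others are
# Gaussian-blurred copies simulating defocus.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
stack = np.stack([gaussian_filter(sharp, s) for s in (3.0, 1.5, 0.0, 1.5, 3.0)])
depth = dff_depth(stack)  # interior pixels map to index 2
```

On a real stack the failure mode the abstract points out appears immediately: in textureless regions the Laplacian response is near zero in every frame, so the argmax is essentially arbitrary.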
Estimating the 3D shape of a scene from a differently focused set of images has been a practical approach to 3D reconstruction with color cameras. However, depth reconstructed with existing depth from focus (DFF) methods still suffers from poor quality in textureless and object-boundary regions. In this paper, we propose improved depth estimation based on depth from focus, iteratively refining the 3D shape from a uniformly focused image set (UFIS). We investigate the appearance changes in the spatial and frequency domains in an iterative manner. To achieve sub-frame accuracy in depth estimation, the optimal location of the focused frame in DFF is estimated by fitting a polynomial curve to the dissimilarity measurements. To avoid wrong depth values in textureless regions, we propose building a confidence map and using it to identify erroneous depth estimates. We evaluated our method on public datasets and our own, captured with different types of devices such as smartphones, medical cameras, and ordinary color cameras. Quantitative and qualitative evaluations on various test image sets show promising performance of the proposed method in depth estimation.
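The sub-frame refinement step can be illustrated with the simplest polynomial choice: fit a parabola through the per-frame measure at the discrete peak and its two neighbours, and take the vertex as the continuous best-focus position. This is a hedged reading of the abstract's "fitting a polynomial curve on the dissimilarity measurements"; the quadratic model and the three-point neighbourhood are assumptions, not details from the paper.

```python
import numpy as np

def subframe_peak(measures):
    """Refine the best-focus frame index to sub-frame accuracy.

    Fits a parabola through the measure at the discrete peak and its two
    neighbours; the vertex gives a continuous frame position.

    measures: 1-D array of per-frame focus/dissimilarity measures for one
    pixel (higher = better focus). Returns a float frame position.
    """
    i = int(np.argmax(measures))
    if i == 0 or i == len(measures) - 1:
        return float(i)  # peak at the stack boundary: keep the integer index
    y0, y1, y2 = measures[i - 1], measures[i], measures[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:
        return float(i)  # flat neighbourhood: no reliable sub-frame offset
    # Vertex of the parabola through (i-1, y0), (i, y1), (i+1, y2).
    return i + 0.5 * (y0 - y2) / denom

# A measure sampled from a parabola peaking at frame 2.3 is recovered
# (up to floating-point error), even though the discrete argmax is 2.
xs = np.arange(5)
vals = -(xs - 2.3) ** 2
pos = subframe_peak(vals)  # ≈ 2.3
```

A flat or near-zero-curvature neighbourhood (the `denom == 0` branch) is exactly the textureless case; the abstract's confidence map serves to flag such pixels rather than trust the fitted position.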