2019
DOI: 10.1007/978-3-030-20893-6_33
Deep Depth from Focus

Abstract: Depth from focus (DFF) is one of the classical ill-posed inverse problems in computer vision. Most approaches recover the depth at each pixel based on the focal setting which exhibits maximal sharpness. Yet, it is not obvious how to reliably estimate the sharpness level, particularly in low-textured areas. In this paper, we propose 'Deep Depth From Focus (DDFF)' as the first end-to-end learning approach to this problem. One of the main challenges we face is the hunger for data of deep neural networks. In order…
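The classical pipeline the abstract contrasts against can be sketched in a few lines: score every pixel in every slice of the focal stack with a sharpness measure, then take the per-pixel argmax over focal settings. The helper names and the Laplacian-based sharpness measure below are illustrative choices, not the method of the paper (which replaces this hand-crafted step with an end-to-end network):

```python
import numpy as np

def laplacian(img):
    """Discrete 4-neighbor Laplacian (zero at the border)."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return lap

def depth_from_focus(focal_stack):
    """Classic DFF baseline (a sketch, not the paper's method).

    focal_stack: array of shape (N, H, W), one grayscale image per
    focal setting. Returns, per pixel, the index of the slice with
    maximal sharpness -- a proxy for depth once focal distances are known.
    """
    sharpness = np.stack([np.abs(laplacian(img)) for img in focal_stack])
    return np.argmax(sharpness, axis=0)
```

In low-textured areas the Laplacian response is near zero in every slice, so the argmax is essentially arbitrary — exactly the failure mode the abstract points out.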

Cited by 45 publications (69 citation statements)
References 41 publications
“…An alternative strategy to constraining the geometry of the scene is to vary the camera's focus. Using this "depth from (de)focus" [24] approach, depth can be estimated from focal stacks using classic vision techniques [53] or deep learning approaches [26]. Focus can be made more informative in depth estimation by manually "coding" the aperture of a camera [40], thereby causing the camera's circle of confusion to more explicitly encode scene depth.…”
Section: Related Work
confidence: 99%
“…Second, dead zones can be overcome using several images with various in-focus planes. In a single snapshot context, this can be obtained with unconventional optics such as a plenoptic camera [32] or a lens with chromatic aberration [33,12], but both at the cost of image quality (low resolution or chromatic aberration).…”
Section: Related Work
confidence: 99%
“…Learning depth from defocus blur. The existence of common datasets for depth estimation [35,1,32], containing pairs of RGB images and corresponding depth maps, facilitates the creation of synthetic defocused images using real camera parameters. Hence, a deep learning approach can be used.…”
Section: Related Work
confidence: 99%
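The statement above notes that defocused images can be synthesized from RGB-D pairs using real camera parameters. The standard ingredient is the thin-lens circle-of-confusion model, which gives the blur diameter on the sensor as a function of object distance; the function name and argument conventions below are ours, not from the cited works:

```python
def circle_of_confusion(obj_dist, focus_dist, focal_len, f_number):
    """Thin-lens circle-of-confusion diameter on the sensor.

    obj_dist, focus_dist: distances from the lens (same units, > focal_len).
    focal_len: lens focal length (result is in these units).
    f_number: aperture f-number N, so the aperture diameter is f/N.
    """
    aperture = focal_len / f_number
    return (aperture * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))
```

A point at the focus distance maps to a zero-diameter blur; points nearer or farther blur increasingly, which is what lets a synthetic defocus renderer turn a depth map into a plausibly defocused image.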
“…Classic methods like Shape from Shading [27,38] and depth from focus [9,32] elaborated physical and mathematical properties of light shading and the focal setting at each pixel. Recent works extended classic methods with machine learning, like deep depth from focus [12] and depth estimation based on Fourier domain analysis [19]. Fully convolutional networks (FCNs) are used to predict depth maps [18] or refine coarse-scale depth values [6].…”
Section: Depth Estimation
confidence: 99%