2019
DOI: 10.1007/978-3-030-11009-3_18

Deep Depth from Defocus: How Can Defocus Blur Improve 3D Estimation Using Dense Neural Networks?

Abstract: Depth estimation is of critical interest for scene understanding and accurate 3D reconstruction. Most recent deep-learning approaches to depth estimation exploit the geometric structure of standard sharp images to predict the corresponding depth maps. However, cameras can also produce images with defocus blur, depending on the depth of the objects and the camera settings. Hence, these features may represent an important hint for learning to predict depth. In this paper, we propose a full system for single-image dep…

Cited by 42 publications (64 citation statements). References 43 publications (86 reference statements).
“…From the lightfield images, we follow the procedure of [31] to generate the all-in-focus and shallow-DoF images, and split the dataset into 3143 training and 300 test images. DSLR dataset [3]: this dataset contains 110 images with ground-truth depth from indoor scenes, with 81 images for training and 29 for testing, plus 34 images from outdoor scenes without ground-truth depth. Each scene is ac-[…] Make3D [27,28]: the Make3D benchmark contains 534 RGB-depth pairs, split into 400 pairs for training and 134 for testing.…”
Section: Methods
confidence: 99%
“…where D_o is the distance from an object to the lens plane, and A = F/N, where N is the f-number of the camera. While the CoC is usually measured in millimeters (C_mm), we convert its size to pixels by assuming a camera pixel size of p = 5.6 µm, as in [3], and a camera output scale s, which is the ratio between the sensor size and the output image size. The final CoC size in pixels, C, is computed as follows:…”
Section: Depth From Defocus
confidence: 99%
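The quoted passage stops just before the formula itself. As a rough illustration, the sketch below implements the standard thin-lens circle-of-confusion model that the description points to, converting C_mm to pixels with the pixel size p and the output scale s. The focus distance D_f, the focal length F, and the exact form of the expression are assumptions for this sketch, not taken from the paper.

```python
# Hedged sketch: standard thin-lens circle-of-confusion (CoC) model.
# The focus distance D_f and the exact CoC expression are assumptions;
# the formula used in the cited paper may differ.

def coc_pixels(D_o, D_f, F, N, p=5.6e-3, s=1.0):
    """Approximate CoC size in pixels for an object at distance D_o.

    D_o : object-to-lens distance (mm)
    D_f : in-focus (subject) distance (mm)
    F   : focal length (mm)
    N   : f-number, so the aperture diameter is A = F / N
    p   : pixel size (mm); 5.6e-3 mm = 5.6 µm as in the quoted text
    s   : output scale (sensor size / output image size)
    """
    A = F / N  # aperture diameter (mm)
    # Thin-lens CoC diameter in millimeters (geometric-optics form).
    C_mm = A * F * abs(D_o - D_f) / (D_o * (D_f - F))
    # Convert millimeters to pixels using the pixel size and output scale.
    return C_mm / (p * s)

# Example: a 50 mm f/2.8 lens focused at 2 m, object at 3 m -> ~27 px CoC.
print(coc_pixels(D_o=3000.0, D_f=2000.0, F=50.0, N=2.8))
```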
“…However, performance is highly dependent on the training dataset. To address this issue, several recent approaches have incorporated physical camera parameters into their image formation model, including focal length [14] and defocus blur [1], to implicitly encode 3D information into a 2D image. We build on these previous insights and perform a significantly more extensive study that evaluates several types of fixed lenses as well as fully optimizable camera lenses for monocular depth estimation and 3D object detection tasks.…”
Section: Related Work
confidence: 99%