2017
DOI: 10.48550/arxiv.1704.01085
Preprint
Deep Depth From Focus

Cited by 4 publications (6 citation statements)
References 0 publications
“…Second, dead zones can be overcome using several images with various in-focus planes. In a single snapshot context, this can be obtained with unconventional optics such as a plenoptic camera [32] or a lens with chromatic aberration [33,12], but both at the cost of image quality (low resolution or chromatic aberration).…”
Section: Related Work
confidence: 99%
“…Learning depth from defocus blur. The existence of common datasets for depth estimation [35,1,32], containing pairs of RGB images and corresponding depth maps, facilitates the creation of synthetic defocused images using real camera parameters. Hence, a deep learning approach can be used.…”
Section: Related Work
confidence: 99%
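The statement above notes that synthetic defocused images can be generated from RGB-D datasets using real camera parameters. A minimal sketch of the underlying idea, using the standard thin-lens circle-of-confusion model; all parameter values (focal length, f-number, pixel pitch, function name) are illustrative assumptions, not taken from the cited papers:

```python
import numpy as np

def circle_of_confusion(depth_m, focus_m, focal_mm=50.0, f_number=2.0,
                        pixel_pitch_mm=0.005):
    """Circle-of-confusion diameter in pixels for each scene depth.

    Thin-lens model: CoC = A * f * |d - d_f| / (d * (d_f - f)),
    where A is the aperture diameter, f the focal length,
    d the scene depth, and d_f the in-focus distance.
    """
    f = focal_mm / 1000.0          # focal length in metres
    aperture = f / f_number        # aperture diameter in metres
    coc_m = aperture * f * np.abs(depth_m - focus_m) / (
        depth_m * (focus_m - f))
    return coc_m * 1000.0 / pixel_pitch_mm  # metres -> mm -> pixels

depths = np.array([0.5, 1.0, 2.0, 4.0])   # scene depths in metres
blur = circle_of_confusion(depths, focus_m=1.0)
# Blur is zero at the in-focus plane (1.0 m) and grows away from it,
# which is what makes defocus blur usable as a depth cue.
```

A training pipeline of the kind the quote describes would spatially vary a blur kernel according to this per-pixel CoC to turn an all-in-focus RGB-D pair into a (defocused image, depth map) training example.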
“…Since the lens response is depth dependent (due to the different behaviour of in- and out-of-focus regions), this feature can be employed for depth estimation. Under this category one may find either depth from focus/defocus [Darrell and Wohn 1988; Schechner and Kiryati 2000; Trouvé et al 2013; Suwajanakorn et al 2015; Carvalho et al 2018; Gur and Wolf 2019] or depth from a focal stack [Lin et al 2013; Lin et al 2015; Hazirbas et al 2018]. A recent work [Guo et al 2017] attempts to combine two focal stacks (acquired using a light-field stereo pair) to achieve improved depth estimation.…”
Section: Monocular Depth Estimation
confidence: 99%
“…Such overloading of terminology can create lasting confusion. New machine learning papers referring to deconvolution might be (i) invoking its original meaning, (ii) describing upconvolution, or (iii) attempting to resolve the confusion, as in [28], which awkwardly refers to "upconvolution (deconvolution)".…”
Section: Overloading Technical Terminology
confidence: 99%