2021
DOI: 10.48550/arxiv.2104.05969
Preprint

Dynamic Fusion Network For Light Field Depth Estimation

Abstract: Focus-based methods have shown promising results for the task of depth estimation. However, most existing focus-based depth estimation approaches depend on maximal sharpness of the focal stack. Out-of-focus information in the focal stack poses challenges for this task. In this paper, we propose a dynamically multi-modal learning strategy which incorporates RGB data and the focal stack in our framework. Our goal is to deeply excavate the spatial correlation in the focal stack by designing the spatial-correlatio…
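
The abstract is truncated, but the core idea it states, fusing RGB data with the focal stack rather than relying on maximal sharpness alone, lends itself to a small illustration. The sketch below is not the paper's Dynamic Fusion Network; it only shows one plausible form of such a fusion, with a crude Laplacian sharpness score standing in for learned dynamic weights. All function names, the sharpness heuristic, and the 2-channel output are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's method): dynamically weight focal-stack
# slices by a simple per-slice sharpness score, then fuse with the RGB image.
import numpy as np

def sharpness(slice_img: np.ndarray) -> float:
    """Mean absolute Laplacian as a crude per-slice focus measure (assumption)."""
    lap = (np.roll(slice_img, 1, 0) + np.roll(slice_img, -1, 0)
           + np.roll(slice_img, 1, 1) + np.roll(slice_img, -1, 1)
           - 4.0 * slice_img)
    return float(np.abs(lap).mean())

def fuse(rgb: np.ndarray, stack: np.ndarray) -> np.ndarray:
    """Softmax-weighted sum of focal slices, stacked with a grayscale RGB channel."""
    scores = np.array([sharpness(s) for s in stack])
    w = np.exp(scores - scores.max())        # numerically stable softmax
    w /= w.sum()
    fused_stack = np.tensordot(w, stack, axes=1)  # (H, W) weighted slice sum
    gray = rgb.mean(axis=-1)                      # (H, W)
    return np.stack([gray, fused_stack], axis=0)  # 2-channel input to a depth net

rgb = np.random.rand(64, 64, 3)       # toy RGB image
stack = np.random.rand(8, 64, 64)     # toy 8-slice focal stack
print(fuse(rgb, stack).shape)         # (2, 64, 64)
```

In the paper the fusion weights are presumably learned end to end; the fixed sharpness heuristic here only makes the dynamic-weighting idea concrete.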

Cited by 2 publications (2 citation statements, from publications in 2022 and 2023). References 43 publications.

Citation statements, ordered by relevance:
“…Other deep learning-based approaches focus on multiple sensor modalities to estimate the depth of the scene. [67] use LiDAR point clouds in combination with stereo images, [68] use monocular RGB images combined with sparse LiDAR point clouds, [69] input monocular RGB combined with a depth map, and [70] combine a single RGB image with a focal stack.…”
Section: B. Monocular Depth Estimation (mentioning)
confidence: 99%
“…As a result, it can be viewed as an array of images captured by a grid of cameras. Compared to RGB images captured by a regular camera or depth maps acquired by a depth sensor, the light field data acquired by a plenoptic camera records more comprehensive and complete information about natural scenes, covering, for example, depth information [38][39][40][41][42][43][44], focusness cues [5,42], and angular changes [42,45]. Therefore, light field data can benefit salient object detection (SOD) in a number of ways.…”
Section: Introduction (mentioning)
confidence: 99%
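
The statement above treats a light field as a grid of sub-aperture images whose focusness cues come from synthetic refocusing. As a concrete illustration, here is a minimal shift-and-sum refocusing sketch that builds a focal stack from a toy 4D light field; the array layout lf[u, v, y, x], the integer-pixel shifts, and all names are assumptions for illustration, not code from the cited works.

```python
# Minimal shift-and-sum refocusing sketch: sweep the focal plane by
# shifting each sub-aperture view by its angular offset, then averaging.
import numpy as np

def refocus(lf: np.ndarray, alpha: float) -> np.ndarray:
    """Refocus a grayscale light field lf[u, v, y, x] at one focal plane."""
    U, V, H, W = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            # np.roll gives integer-pixel shifts; real pipelines interpolate.
            out += np.roll(np.roll(lf[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)

def focal_stack(lf: np.ndarray, alphas) -> np.ndarray:
    """Stack refocused slices for a sweep of focal planes."""
    return np.stack([refocus(lf, a) for a in alphas], axis=0)

# Example: a 5x5 grid of 64x64 views -> an 8-slice focal stack.
lf = np.random.rand(5, 5, 64, 64)
stack = focal_stack(lf, np.linspace(-2.0, 2.0, 8))
print(stack.shape)  # (8, 64, 64)
```

Sweeping alpha moves the in-focus plane through the scene, which is exactly what gives focal stacks the per-depth focusness cues that focus-based depth estimators exploit.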