2019
DOI: 10.1109/tip.2019.2923323

A Framework for Learning Depth From a Flexible Subset of Dense and Sparse Light Field Views

Abstract: Light field is a type of image data that captures 3D scene information by recording light rays emitted from a scene at various orientations. It offers a more immersive perception than classic 2D images, but at the cost of a huge data volume. In this paper, we draw inspiration from the visual characteristics of Sub-Aperture Images (SAIs) of light field and design a compact neural network representation for the light field compression task. The network backbone takes randomly initialized noise as input and is s…

Cited by 124 publications (92 citation statements)
References 67 publications
“…The authors of [10] utilize deep learning methods to estimate depth maps for light fields. Using a fine-tuned FlowNet 2.0 network, several candidate depth maps are estimated between the target view and other horizontal or vertical views of variable distance.…”
Section: Depth Estimation
Confidence: 99%
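To make the candidate-fusion idea in this statement concrete, here is a minimal sketch (not the authors' released code) of estimating disparity candidates from view pairs at different baselines and fusing them per pixel. The `flow_net` callable stands in for a fine-tuned FlowNet 2.0-style model, and the names, the signed baseline normalization, and the per-pixel median fusion are illustrative assumptions.

```python
import torch

def candidate_disparities(flow_net, views, target_idx, neighbour_idxs):
    """views: dict (u, v) -> tensor [3, H, W]; neighbour_idxs: list of (u, v) keys."""
    target = views[target_idx].unsqueeze(0)                      # [1, 3, H, W]
    candidates = []
    for src_idx in neighbour_idxs:
        du = src_idx[0] - target_idx[0]                          # horizontal angular offset
        dv = src_idx[1] - target_idx[1]                          # vertical angular offset
        flow = flow_net(target, views[src_idx].unsqueeze(0))     # assumed output: [1, 2, H, W]
        if abs(du) >= abs(dv):
            # Horizontal neighbour: the x-component of the flow carries the disparity.
            candidates.append(flow[:, 0] / float(du))
        else:
            # Vertical neighbour: the y-component of the flow carries the disparity.
            candidates.append(flow[:, 1] / float(dv))
    # Fuse the per-baseline candidates; a per-pixel median is one simple, outlier-robust choice.
    return torch.median(torch.stack(candidates, dim=0), dim=0).values   # [1, H, W]
```

Dividing each flow component by its signed angular offset normalizes every candidate to a unit baseline, so candidates from near and far views become directly comparable before fusion.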
“…We can observe that our method always yields the most accurate optical flows (see Table I) and disparity maps (see Table II). Our method is only outperformed by the FDE method [28] for the disparity of the central view on Bamboo2 clean & final and Temple1 clean. However, even for these light fields, our method provides better average results than FDE when considering the disparity maps of all the views, which indicates a better consistency between views.…”
Section: Normalized Cost
Confidence: 89%
“…A deep neural network, called DispNet, is proposed in [26] based on the optical flow estimation network FlowNet2 [27], but computing 1D correlation instead of 2D correlation to be better suited for disparity estimation. The authors in [28] propose a learning-based depth estimation framework, suitable for both densely and sparsely sampled light fields, that can learn depth maps for every viewpoint from any subset of input views.…”
Section: Scene Depth Estimation From Light Fields
Confidence: 99%
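The 1D correlation this statement refers to can be sketched as below. It is a minimal illustration, assuming rectified feature maps where disparity is purely horizontal; it is not the DispNet implementation from [26], and the function name and `max_disp` parameter are placeholders. Restricting the search to shifts along the epipolar line is what makes the layer cheaper and better matched to disparity than FlowNet's 2D correlation.

```python
import torch
import torch.nn.functional as F

def correlation_1d(feat_left, feat_right, max_disp=40):
    """feat_*: [B, C, H, W] feature maps; returns a [B, max_disp + 1, H, W] cost volume."""
    B, C, H, W = feat_left.shape
    # Pad the right features on the left so every shifted slice stays W pixels wide.
    padded = F.pad(feat_right, (max_disp, 0, 0, 0))
    costs = []
    for d in range(max_disp + 1):
        # shifted[x] == feat_right[x - d] (zeros where x - d falls outside the image).
        shifted = padded[:, :, :, max_disp - d : max_disp - d + W]
        # Channel-wise mean of the elementwise product, i.e. a normalised dot product
        # between each left feature vector and the right feature vector d pixels away.
        costs.append((feat_left * shifted).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)
```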
“…Our training data includes 94 synthetic light field scenes [28,29] and 100 real-world scenes captured by a Lytro Illum camera [9]. The model is first trained on synthetic light fields, and then further fine-tuned with real ones.…”
Section: Training Details
Confidence: 99%
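The two-stage schedule described here (pre-training on synthetic light fields, then fine-tuning on real Lytro captures) can be sketched as follows. The dataset loaders, the model, the supervised L1 loss, and the learning rates and epoch counts are all assumptions for illustration, not the authors' training configuration.

```python
import torch

def train_stage(model, loader, lr, epochs, device="cuda"):
    """Run one training stage over a loader yielding (input SAIs, ground-truth depth) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()              # a common choice for depth/disparity regression
    model.train()
    for _ in range(epochs):
        for views, gt_depth in loader:
            views, gt_depth = views.to(device), gt_depth.to(device)
            opt.zero_grad()
            loss = loss_fn(model(views), gt_depth)
            loss.backward()
            opt.step()

# Stage 1: synthetic scenes with ground-truth depth; Stage 2: fine-tune on real captures,
# typically with a smaller learning rate and fewer epochs (values here are placeholders).
# train_stage(model, synthetic_loader, lr=1e-4, epochs=50)
# train_stage(model, lytro_loader,     lr=1e-5, epochs=10)
```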