2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.246

Learning to Synthesize a 4D RGBD Light Field from a Single Image

Abstract: We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN…
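The abstract outlines a three-stage pipeline: a CNN that predicts scene geometry from the single input view, a rendering stage that warps that view into a Lambertian 4D light field using the predicted geometry, and a second CNN that refines the result. The following is a minimal sketch of that structure, assuming a PyTorch implementation; the 8x8 angular resolution, the toy layer sizes, and all module names (depth_net, render_lambertian, refine_net) are illustrative assumptions, not the authors' released architecture.

```python
# Sketch only: a tiny stand-in for the pipeline described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

ANGULAR = 8  # assumed angular resolution (8x8 sub-aperture views)

def small_cnn(in_ch, out_ch):
    # Tiny placeholder for the paper's CNNs; the real networks are much deeper.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

def render_lambertian(rgb, disparity):
    """Warp the central RGB view to every (u, v) view by backward warping
    with one predicted disparity map per ray direction."""
    b, _, h, w = rgb.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).to(rgb)               # (h, w, 2)
    views = []
    for i in range(ANGULAR):
        for j in range(ANGULAR):
            u, v = i - (ANGULAR - 1) / 2, j - (ANGULAR - 1) / 2
            d = disparity[:, i * ANGULAR + j]                   # (b, h, w)
            flow = torch.stack((u * d * 2 / (w - 1),
                                v * d * 2 / (h - 1)), dim=-1)   # normalized shift
            grid = base.unsqueeze(0) + flow                     # (b, h, w, 2)
            views.append(F.grid_sample(rgb, grid, align_corners=True))
    return torch.stack(views, dim=1)                            # (b, A*A, 3, h, w)

class LightFieldSynthesis(nn.Module):
    def __init__(self):
        super().__init__()
        self.depth_net = small_cnn(3, ANGULAR * ANGULAR)   # geometry CNN
        self.refine_net = small_cnn(3 + 1, 3)              # occlusion/residual CNN

    def forward(self, rgb):
        disparity = self.depth_net(rgb)                     # (b, A*A, h, w)
        lambertian = render_lambertian(rgb, disparity)      # (b, A*A, 3, h, w)
        b, n, _, h, w = lambertian.shape
        x = torch.cat([lambertian.reshape(b * n, 3, h, w),
                       disparity.reshape(b * n, 1, h, w)], dim=1)
        refined = lambertian.reshape(b * n, 3, h, w) + self.refine_net(x)
        return refined.reshape(b, n, 3, h, w), disparity

if __name__ == "__main__":
    model = LightFieldSynthesis()
    light_field, disparity = model(torch.rand(1, 3, 64, 64))
    print(light_field.shape)  # torch.Size([1, 64, 3, 64, 64])
```

Predicting one disparity map per ray direction lets the Lambertian warp explain most of the light field geometrically, leaving the second CNN to correct only occluded rays and non-Lambertian effects; this mirrors the division of labor the abstract describes, though the exact inputs to the refinement stage here are assumed.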

Cited by 222 publications (175 citation statements)
References: 33 publications

“…Thus camera parallax could be an issue that affects system performance. In our studies below, we use simulation data from the available KITTI [31], Flower [35], LFVideo [37], and Stanford light field [1] datasets, as well as real data captured by our dual camera system with various parallax settings, to demonstrate the robustness of our method.…”
Section: Efficiency (mentioning, confidence: 99%)
“…Comparison Using Simulation Data: We test our AWnet on the Flower [35], LFVideo [37], and Stanford light field (Lego Gantry) [1] datasets, following the same configuration as in [43]. The Flower and LFVideo datasets are light field images captured with a Lytro ILLUM camera.…”
Section: Efficiency (mentioning, confidence: 99%)
“…Classical image quality metrics, like PSNR and SSIM, do not fully frame the perceptual quality of refocused images [6]. Because there is no consensus on what, quantitatively, makes for a good refocused image (bokeh-wise, but also in terms of object boundaries and physical blur accuracy), subjective assessments are often used [19], and some papers focus exclusively on qualitative assessments [54,52,46,51,45]. In order to provide a quantitative evaluation of our results, in addition to classical metrics, we propose to utilize a perceptual metric commonly used by the super-resolution community [7,6], the NIQE score [34].…”
Section: Experimental Evaluation (mentioning, confidence: 99%)
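The statement above contrasts full-reference metrics (PSNR, SSIM) with the no-reference NIQE score. Below is a small illustration, not taken from any of the cited papers, of computing the classical metrics with scikit-image on a synthetic reference/refocused pair; the image data is random and assumed to lie in [0, 1], and NIQE itself is not part of scikit-image, so it is only mentioned, not computed.

```python
# Full-reference metrics only; a no-reference score such as NIQE would need a
# separate implementation (e.g. the original MATLAB release).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128, 3))        # stand-in ground-truth refocused image
refocused = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, refocused, data_range=1.0)
ssim = structural_similarity(reference, refocused, data_range=1.0, channel_axis=-1)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```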