2019
DOI: 10.1109/access.2019.2900153
Computational Light Field Generation Using Deep Nonparametric Bayesian Learning

Abstract: In this paper, we present a deep nonparametric Bayesian method to synthesize a light field from a single image. Conventionally, light-field capture requires special optical architecture, and the gain in angular resolution often comes at the expense of a reduction in spatial resolution. Techniques for computationally generating the light field from a single image can be expanded further to a variety of applications, ranging from microscopy and materials analysis to vision-based robotic control and autonomous ve…

Cited by 10 publications (5 citation statements)
References 46 publications
“…Angular super-resolution for LF is also known as view synthesis. Many techniques (Kalantari, Wang, and Ramamoorthi 2016; Meng et al. 2019b; Wanner and Goldluecke 2014) take advantage of the disparity map to warp the existing SAIs to novel views. For instance, Pearson, Brookes, and Dragotti (2013) introduced a layer-based synthesis method that renders arbitrary views using probabilistic interpolation and computed depth-layer information.…”
Section: Angular Super-resolution (mentioning, confidence: 99%)
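The warping step described above — shifting pixels of an existing sub-aperture image (SAI) by its disparity map to produce a novel view — can be sketched as follows. This is a minimal, illustrative forward-warping sketch, not the exact method of any cited paper; the function name `warp_view` and its parameters are hypothetical.

```python
import numpy as np

def warp_view(sai, disparity, du, dv):
    """Forward-warp a sub-aperture image to a novel view at angular
    offset (du, dv), shifting each pixel by its disparity (a simplified
    nearest-neighbor sketch; real methods handle occlusion and holes)."""
    h, w = sai.shape[:2]
    out = np.zeros_like(sai)
    ys, xs = np.mgrid[0:h, 0:w]
    # Target coordinates: pixel shift is proportional to disparity
    # times the angular offset; round to nearest pixel and clip.
    xt = np.clip(np.round(xs + du * disparity).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + dv * disparity).astype(int), 0, h - 1)
    out[yt, xt] = sai
    return out

# Hypothetical example: a constant-disparity scene shifts uniformly.
sai = np.arange(1, 17, dtype=float).reshape(4, 4)
novel = warp_view(sai, np.ones((4, 4)), du=1, dv=0)
```

With constant unit disparity and a horizontal offset of one view, every pixel moves one column to the right; unfilled positions (here the leftmost column) remain zero, which is why practical pipelines add occlusion handling and hole filling.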
“…Compared to a 2D imaging system, the plenoptic camera not only captures the accumulated intensity of a light ray at each point in space, but also provides the directional radiance information. Together they form the light field (LF), which has shown advantages over 2D imagery in problems such as disparity estimation (Jeon et al 2015;Sun et al 2016) or 3D reconstruction (Heber, Yu, and Pock 2017) of a scene, generation of images for a novel viewpoint (Kalantari, Wang, and Ramamoorthi 2016;Meng et al 2019b), and refocusing (Mitra and Veeraraghavan 2012).…”
mentioning
confidence: 99%
“…The paired t-test is a common way to test whether the difference between two measurements over various data sets is non-random [43]. Let d_i be the difference between the performance scores of the two detectors on the i-th of the N recall and precision scores (N = 18, as shown in Table 2).…”
Section: E. Performance Analysis (mentioning, confidence: 99%)
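The test statistic behind this excerpt is the mean of the paired differences d_i divided by its standard error, t = d̄ / (s_d / √N), with N − 1 degrees of freedom. A minimal sketch, using illustrative scores rather than the paper's actual N = 18 values:

```python
import math
import statistics

def paired_t_statistic(scores_a, scores_b):
    """Paired t-test statistic for two matched sets of scores:
    t = mean(d) / (stdev(d) / sqrt(N)), where d_i = a_i - b_i."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)  # sample std dev (N - 1 denominator)
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical detector scores: differences are [1, 2, 3],
# so t = 2 / (1 / sqrt(3)) = 2 * sqrt(3).
t = paired_t_statistic([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
```

The resulting t is then compared against the Student's t distribution with N − 1 degrees of freedom to decide whether the performance gap is non-random.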
“…Recently, deep learning has proven to be a powerful technique in a wide range of applications [14], [15]. With the availability of the light field dataset [16], methods based on convolutional neural networks (CNNs) have been successfully applied to light field super-resolution [17], [18]. Yoon et al. [19] established LFCNN, the first deep-learning framework for both spatial and angular super-resolution, but it does not exploit the correlation among adjacent views.…”
Section: Introduction (mentioning, confidence: 99%)