Local Light Field Fusion (2019)
DOI: 10.1145/3306346.3322980

Abstract (excerpt): …demonstrate our approach's practicality with an augmented reality smartphone app that guides users to capture input images of a scene, and viewers that enable real-time virtual exploration on desktop and mobile platforms.

Cited by 787 publications (206 citation statements: 1 supporting, 205 mentioning, 0 contrasting); citing publications span 2019–2024. References 49 publications.

Citation statements, ordered by relevance:
“…A good baseline for image-based rendering is the unstructured lumigraph rendering (ULR) algorithm [2]: it efficiently demonstrates the potential of view-dependent rendering algorithms and has consistently been used by recent works as a reference point for comparison [1,10,11,15]. Therefore, we decided to implement our own version of ULR using vertex/fragment shaders.…”
Section: View-Dependent Rendering of a Global Mesh (3.1.1 Our Implementation)
confidence: 99%
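
The ULR algorithm this statement takes as its baseline blends several nearby input cameras at each surface point, weighting each camera by how well its ray toward the point agrees with the novel view's ray. Below is a minimal NumPy sketch of that angular weighting; the function name, the k-nearest truncation, and the eps constant are illustrative assumptions, and a real vertex/fragment-shader version also folds in resolution and field-of-view penalties as in the full formulation.

```python
import numpy as np

def ulr_weights(point, target_pos, source_positions, k=4, eps=1e-4):
    """Angular blending weights for one surface point, ULR-style.

    point:            (3,) point on the proxy geometry being shaded
    target_pos:       (3,) position of the novel viewpoint
    source_positions: (n, 3) positions of the captured input cameras
    Returns (n,) weights, nonzero only for the k best cameras, summing to 1.
    """
    d_t = point - target_pos
    d_t = d_t / np.linalg.norm(d_t)
    d_s = point - source_positions
    d_s = d_s / np.linalg.norm(d_s, axis=1, keepdims=True)

    # Penalty = angle between each source ray and the desired viewing ray.
    ang = np.arccos(np.clip(d_s @ d_t, -1.0, 1.0))

    # Keep the k best cameras; the (k+1)-th penalty sets the cutoff so
    # weights fall smoothly to zero at the edge of the selected set.
    order = np.argsort(ang)
    cutoff = ang[order[k]] if len(order) > k else ang.max() + eps
    kept = order[:k]
    w = np.zeros(len(source_positions))
    w[kept] = np.maximum(0.0, 1.0 - ang[kept] / (cutoff + eps))
    return w / w.sum()

# Example: blend 8 cameras scattered around a point at the origin.
rng = np.random.default_rng(0)
cams = rng.normal(0.0, 1.0, (8, 3)) + np.array([0.0, 0.0, 4.0])
print(ulr_weights(np.zeros(3), np.array([0.2, 0.0, 4.0]), cams))
```

The smooth falloff to zero at the cutoff angle is what keeps the blend temporally stable: as the viewpoint moves and a camera leaves the k-best set, its contribution is already zero rather than popping off.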
“…of weights thus have to be passed via the interpolators. Finally, unstructured lumigraph rendering typically suffers from noticeable ghosting artifacts [15,23], linked to inconsistencies between the image data and the reconstructed camera setup and 3D mesh. This remains a limitation in our current implementation.…”
Section: Obstacles and Limitations
confidence: 99%
“…Deep neural networks [Bengio et al. 2012; Krizhevsky et al. 2012; LeCun et al. 2015] have shown remarkable achievements in encoding highly complex functions in a non-linear fashion. Recent works [Lombardi et al. 2019; Mildenhall et al. 2019; Sitzmann et al. 2019] used neural networks to learn implicit representations from imagery without requiring any explicit geometry. Simple MLP-based architectures have also shown remarkable improvements in both view synthesis and in encoding complex light interactions [Kallweit et al. 2017].…”
Section: Related Work
confidence: 99%
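
The "implicit representations" this statement refers to reduce to a coordinate network: an MLP queried with a spatial position (and usually a view direction) that returns appearance, with no mesh or voxel grid stored anywhere. Here is a minimal NumPy sketch of that functional form; the layer sizes, the 5-D input, and all names are illustrative assumptions, and the cited systems add ingredients such as positional encoding, density outputs, and differentiable volume rendering.

```python
import numpy as np

def init_mlp(sizes, rng):
    """He-initialized weights for an MLP, e.g. sizes = [5, 256, 256, 3]."""
    return [(rng.normal(0.0, np.sqrt(2.0 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def radiance(params, xyz, view_dir):
    """Query the network as an implicit scene: (position, direction) -> RGB.

    The network itself is the scene representation; rendering means
    evaluating it at many points and directions, not rasterizing geometry.
    """
    h = np.concatenate([xyz, view_dir])
    for w, b in params[:-1]:
        h = np.maximum(h @ w + b, 0.0)         # ReLU hidden layers
    w, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(h @ w + b)))  # sigmoid keeps RGB in [0, 1]

params = init_mlp([5, 256, 256, 3], np.random.default_rng(0))
rgb = radiance(params, xyz=np.array([0.1, 0.5, -2.0]),
               view_dir=np.array([0.0, 1.0]))
print(rgb)  # untrained output; training would fit this to posed images
```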
“…While our reconstruction pipeline is based on traditional coarse-to-fine, hierarchical patch matching, one could also apply one of the more recent machine-learning-based approaches such as [Chabra et al. 2019; Donne and Geiger 2019; Mildenhall et al. 2019; Tonioni et al. 2019; Xu et al. 2019; Yao et al. 2019b; Zhang et al. 2019]. Since we are open-sourcing our multi-view stereo datasets, we hope that others will investigate whether further improvements could be obtained using these newer approaches.…”
Section: 3D Reconstruction / Depth Estimation
confidence: 99%
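
The "traditional coarse-to-fine, hierarchical patch matching" contrasted here with learned methods can be sketched as: estimate correspondence on heavily downsampled images first, then upsample that estimate and refine it with a local patch search at each finer pyramid level. The toy two-view disparity version below is illustrative only (the names, SAD cost, and parameters are assumptions; a production multi-view stereo pipeline matches many views with photometric consistency checks and sub-pixel refinement) and assumes grayscale float images whose sides are divisible by 2**levels.

```python
import numpy as np

def downsample(img):
    """2x box-filter downsample: one coarser pyramid level."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def refine_level(left, right, init_disp, radius=2, patch=3):
    """Refine per-pixel disparity near an initial guess by SAD patch matching."""
    h, w = left.shape
    pad = patch // 2
    L = np.pad(left, pad)
    R = np.pad(right, pad)
    disp = init_disp.copy()
    for y in range(h):
        for x in range(w):
            center = int(disp[y, x])
            best_cost, best_d = np.inf, center
            for d in range(center - radius, center + radius + 1):
                if not 0 <= x - d < w:
                    continue
                cost = np.abs(L[y:y + patch, x:x + patch]
                              - R[y:y + patch, x - d:x - d + patch]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def coarse_to_fine_disparity(left, right, levels=3):
    """Hierarchical matching: solve the coarsest level, then upsample and refine."""
    pyramid = [(left, right)]
    for _ in range(levels - 1):
        pyramid.append((downsample(pyramid[-1][0]), downsample(pyramid[-1][1])))
    disp = np.zeros(pyramid[-1][0].shape)
    for lvl, (l, r) in enumerate(reversed(pyramid)):
        if lvl > 0:  # upsample coarser estimate; disparity doubles with width
            disp = 2.0 * np.repeat(np.repeat(disp, 2, axis=0), 2, axis=1)
        disp = refine_level(l, r, disp)
    return disp
```

The hierarchy is what keeps the search tractable: each level only searches a small radius around the estimate inherited from the level below, so large displacements are resolved cheaply at coarse resolution.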