2019
DOI: 10.1109/tvcg.2019.2898799

MegaParallax: Casual 360° Panoramas with Motion Parallax

Cited by 40 publications (43 citation statements)
References 37 publications

“…Normally, 360° images presented in VR are stereoscopic. Thus, solutions for capturing and rendering stereo imagery have been proposed with both fixed camera arrays [24][25][26][27] and casual photography [28][29][30]. The raw images and video are then warped and stitched to make a 360° panorama [31] for VR display.…”
Section: Panoramic Image and Video Creation
Citation type: mentioning
confidence: 99%
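The warp-and-stitch step this excerpt refers to can be illustrated with a minimal sketch built on OpenCV's high-level stitching API; the input file names are placeholders and the code is a generic example, not the pipeline of the cited systems.

```python
import cv2

# Placeholder frames sampled from a casual hand-held sweep (hypothetical names).
paths = ["frame_000.jpg", "frame_030.jpg", "frame_060.jpg"]
images = [cv2.imread(p) for p in paths]

# OpenCV's Stitcher estimates pairwise registrations, warps each frame onto a
# common (e.g. cylindrical/spherical) surface, and blends the seams.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status code {status}")
```

For VR display the result would additionally be reprojected to an equirectangular (or stereo) format, which this sketch does not cover.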
“…Furthermore, the photos have often been taken at different points in time, making 3D reconstruction problematic as the illumination can differ significantly and objects may have moved in the scene [39]. Finally, another interesting capture strategy is unstructured video capture, where a user records video while moving the camera to capture multiple viewpoints [40,41,42]. In this thesis, we employ unstructured capture strategies that can easily be performed by a single camera operator and that also provide footage suitable for VR experiences with 360° coverage of the scene.…”
Section: Unstructured Capture
Citation type: mentioning
confidence: 99%
“…However, these approaches focus on object capture, resulting in small light fields with a limited field of view. Recently, image warping has made it possible to build larger, panoramic light fields from hand-held footage, although the output camera motion is restricted to a disk [42]. Even with geometry from modern 3D reconstruction techniques, motorized capture rigs are still necessary to build light fields with a large enough range of motion for seated VR experiences [29].…”
Section: Light Fields
Citation type: mentioning
confidence: 99%
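As a rough illustration of rendering from a circular, hand-held capture path like the one in [42], the sketch below (with hypothetical helper names, not code from the cited works) selects the two captured views whose azimuths bracket a desired ray direction and computes a blend weight between them.

```python
import numpy as np

def bracketing_views(camera_angles, ray_angle):
    """Return the indices of the two capture-circle views that bracket
    `ray_angle`, plus the blend weight for the second view.

    camera_angles : sorted camera azimuths in [0, 2*pi) (radians)
    ray_angle     : azimuth of the desired ray from the novel viewpoint
    Interface and names are illustrative assumptions only.
    """
    angles = np.asarray(camera_angles)
    j = int(np.searchsorted(angles, ray_angle) % len(angles))  # first view past the ray
    i = (j - 1) % len(angles)                                  # preceding view (wraps around)
    span = (angles[j] - angles[i]) % (2 * np.pi)
    w = ((ray_angle - angles[i]) % (2 * np.pi)) / span if span > 0 else 0.0
    return i, j, w
```

Flow- or depth-based systems would then warp the two bracketing views toward the novel ray and blend them with this weight.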
“…To overcome these limitations, some systems use geometric proxies or view interpolation techniques originally developed for image-based rendering systems. While the earliest light-field capture and rendering systems such as the Lumigraph [Gortler et al 1996] and Unstructured Lumigraph [Buehler et al 2001] used global 3D mesh proxies, more recent systems such as [Hedman et al 2016; Overbeck et al 2018; Penner and Zhang 2017; Zitnick et al 2004] estimate per-camera depth maps or pair-wise flow fields [Bertel et al 2019] and use these to warp the source images into the desired novel viewpoints. Other approaches focus on adaptive meshing and mesh tracking to create temporally coherent geometry [Collet et al 2015].…”
Section: Related Work
Citation type: mentioning
confidence: 99%
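The pair-wise flow warping mentioned above can be sketched as a simple view interpolation between two neighbouring captured images; this uses a common approximation (backward sampling with a scaled forward flow) and is not the exact formulation of any of the cited papers.

```python
import cv2
import numpy as np

def warp_with_flow(src, flow, alpha):
    """Warp `src` a fraction `alpha` of the way along a dense flow field
    (shape HxWx2, mapping pixels of `src` to the neighbouring view).
    Backward sampling with the scaled forward flow is an approximation."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - alpha * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - alpha * flow[..., 1]).astype(np.float32)
    return cv2.remap(src, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def interpolate_views(img_a, img_b, flow_ab, flow_ba, alpha):
    """Blend two neighbouring source views into an in-between viewpoint
    (alpha = 0 reproduces view A, alpha = 1 reproduces view B)."""
    warped_a = warp_with_flow(img_a, flow_ab, alpha)
    warped_b = warp_with_flow(img_b, flow_ba, 1.0 - alpha)
    return cv2.addWeighted(warped_a, 1.0 - alpha, warped_b, alpha, 0.0)
```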
“…To achieve high rendering quality, many of these systems use a large number of images and cameras swept along a circular arc or spherical surface [Bertel et al 2019; Overbeck et al 2018], which restricts their use to static scenes. For example, the system built by [Overbeck et al 2018] takes between 30 seconds and 33 minutes to acquire a 360° scene.…”
Section: Related Work
Citation type: mentioning
confidence: 99%