2020
DOI: 10.1145/3386569.3392485
Immersive light field video with a layered mesh representation

Abstract: We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells that are better suited for representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers wi…
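The abstract describes collapsing many scene layers into a small, fixed number of RGBA+depth layers that are blended at render time. As a rough illustration of how such a layer stack is merged into a final image, here is a minimal back-to-front "over" compositing sketch in NumPy. This is a generic compositing routine, not the paper's actual renderer (which rasterizes textured layered meshes on the GPU); the premultiplied-alpha convention is an assumption for this sketch.

```python
import numpy as np

def composite_layers(layers):
    """Back-to-front 'over' compositing of premultiplied RGBA layers.

    `layers` is a list of (H, W, 4) float arrays ordered far-to-near,
    with premultiplied alpha. Each nearer layer partially occludes the
    accumulated result behind it according to its alpha channel.
    """
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 4), dtype=np.float64)
    for layer in layers:  # far to near
        alpha = layer[..., 3:4]          # (H, W, 1) for broadcasting
        out = layer + (1.0 - alpha) * out
    return out
```

With, say, an opaque far layer and a 50%-transparent near layer, the result is an even blend of the two, which is the behavior a small RGBA layer stack relies on.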

Cited by 227 publications (179 citation statements)
References 38 publications
“…This requires rendering from a large SSD storage RAID on a high-end GPU. In contrast, we demonstrate two systems that produce a compelling HMP experience rendered on a single GPU using intermediate representations that store just a few texture-plus-depth panoramas and can be generated in less than 2 CPU-hours (compared, e.g., to 28.5 CPU-hours for the format proposed by Broxton et al [7]). Our representations are constructed from 28 input images (compared to over four thousand for the approach in Luo et al [16]).…”
Section: Rendering
confidence: 92%
“…Since the cameras are all in one horizontal plane, such rigs do not catch enough information to support vertical head motion. With this in mind, spherical camera rigs have been proposed more recently [6], [7]. However, a seated VR viewer typically has a much larger horizontal than vertical range of head motion [8].…”
Section: Background Capture System
confidence: 99%
“…The researchers are still mainly focused on multiview video plus depth (MVD) representation [35]; therefore, further considerations presented in this paper also concern MVD. Of course, multi-plane images (MPI) [7] and their variants [22] are gaining much attention; nevertheless, in these representations depth information is still present, albeit in another form.…”
Section: Introduction
confidence: 99%
“…Furthermore, the diffraction or interference of light can degrade the quality of the results of reconstruction, rendering, or depth estimation. The quality of light field imaging depends on both the number of images and the density of the light field, so view interpolation is one of the most important tasks for a light field camera [37]. Interpolation is considered a simple and efficient method to increase the resolution of an image, so it is widely used in image processing.…”
Section: Introduction
confidence: 99%
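The statement above notes that interpolation is the basic tool for densifying sampled imagery. As a concrete illustration, here is a minimal bilinear sampling routine: the per-image resampling step of any view-interpolation pipeline reduces to a weighted average of the four nearest pixels. This is a generic textbook sketch, not the interpolation scheme of any cited paper; the function name and clamping behavior are assumptions.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Bilinearly interpolate a 2-D array `img` at continuous coords (x, y).

    Coordinates outside the grid are clamped to the border. The result
    is a distance-weighted average of the four surrounding pixels.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0          # fractional offsets in [0, 1)
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot
```

Sampling at the center of a 2x2 patch returns the mean of its four values, the expected bilinear behavior.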