2021
DOI: 10.48550/arxiv.2107.06505
Preprint

Few-shot Neural Human Performance Rendering from Sparse RGBD Videos

Anqi Pang,
Xin Chen,
Haimin Luo
et al.

Abstract: Recent neural rendering approaches for human activities achieve remarkable view synthesis results, but still rely on dense input views or dense training with all the capture frames, leading to deployment difficulty and inefficient training overload. However, existing advances will be ill-posed if the input is both spatially and temporally sparse. To fill this gap, in this paper we propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD inputs, which exploits the temporal and spatial redundancy…


Year Published: 2021
Cited by 1 publication (1 citation statement)
References 6 publications
“…In particular, the approaches with implicit functions [50,52,73] reconstruct clothed humans with fine geometry details but are restricted to humans only, without modeling human-object interactions. For photorealistic human performance rendering, various data representations have been explored, such as point clouds [42,64], voxels [30], implicit representations [36,47,48,62], or hybrid neural texturing [57]. However, existing solutions rely on dome-level dense RGB sensors or are limited to human priors without considering the joint rendering of human-object interactions.…”
Section: Introduction
confidence: 99%