2022
DOI: 10.48550/arxiv.2202.12825
Preprint

NeuralHOFusion: Neural Volumetric Rendering under Human-object Interactions

Abstract: 4D modeling of human-object interactions is critical for numerous applications. However, efficient volumetric capture and rendering of complex interaction scenarios, especially from sparse inputs, remain challenging. In this paper, we propose NeuralHOFusion, a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors. It marries traditional non-rigid fusion with recent neural implicit modeling and blending advances, where the captured humans and objects are layerwise di…

Cited by 1 publication (4 citation statements); references 55 publications.
“…In the photo-realistic novel view synthesis and 3D scene modeling domain, differentiable neural rendering based on various data proxies achieves impressive results and has become increasingly popular. Various data representations are adopted to obtain better performance and characteristics, such as point clouds [Aliev et al 2020; Suo et al 2020], voxels [Lombardi et al 2019], textured meshes [Shysheya et al 2019; Thies et al 2019], implicit functions [Kellnhofer et al 2021; Mildenhall et al 2020; Park et al 2019] and hybrid neural blending [Jiang et al 2022a; Sun et al 2021; Suo et al 2021]. More recently, [Li et al 2020; Park et al 2020; Pumarola et al 2021] extend the neural radiance field [Mildenhall et al 2020] into the dynamic setting.…”
Section: Related Work
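The neural radiance field rendering these excerpts reference composites densities and colors sampled along each camera ray into a single pixel color. A minimal sketch of that quadrature follows; the function name, array shapes, and sampling are illustrative assumptions, not code from the cited papers:

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray
    (NeRF-style alpha compositing; shapes are assumptions).

    sigmas: (N,) volume densities at N samples along the ray
    colors: (N, 3) RGB values at those samples
    deltas: (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)        # opacity of each segment
    trans = np.cumprod(1.0 - alphas + 1e-10)       # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])    # shift so T_1 = 1
    weights = trans * alphas                       # contribution per sample
    return (weights[:, None] * colors).sum(axis=0) # expected ray color
```

Dynamic variants of this idea additionally warp the sample positions through a learned deformation field before querying density and color.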
“…[Hu et al 2021; Peng et al 2021a,b] utilize the human prior SMPL [Loper et al 2015] model as an anchor and use the linear blend skinning algorithm to warp the radiance field. Furthermore, [Jiang et al 2022a; Sun et al 2021] extend dynamic neural rendering and blending into human-object interaction scenarios. However, for the vast majority of the approaches above, dense spatial views are still required for high-fidelity novel view rendering.…”
Section: Related Work
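The linear blend skinning warp mentioned in this excerpt blends per-bone rigid transforms by per-point skinning weights and applies the result to canonical-space points. This is an illustrative sketch under assumed shapes, not the cited papers' implementation:

```python
import numpy as np

def lbs_warp(points, skin_weights, bone_transforms):
    """Warp canonical points by linear blend skinning (illustrative).

    points: (P, 3) points in canonical (rest) space
    skin_weights: (P, B) per-point weights over B bones; rows sum to 1
    bone_transforms: (B, 4, 4) rigid transform of each bone
    """
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (P, 4)
    # Blend the 4x4 bone transforms per point, then apply to each point.
    blended = np.einsum('pb,bij->pij', skin_weights, bone_transforms)   # (P, 4, 4)
    warped = np.einsum('pij,pj->pi', blended, homo)                     # (P, 4)
    return warped[:, :3]
```

In radiance-field settings this warp is typically inverted: an observation-space sample is mapped back to canonical space before the field is queried.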