2022
DOI: 10.1007/978-3-031-20068-7_16
Neural Capture of Animatable 3D Human from Monocular Video

Cited by 16 publications (2 citation statements) | References 33 publications
“…Mildenhall et al. [36,68] pioneered NeRFs for representing static scenes with a color and density field without requiring any 3D ground-truth supervision. Recently, this approach has been extended to reconstruct clothed humans as well [43,58,12,67,24,66,30,65,23,19,62]. These approaches use SMPL as a prior to unpose the human body across multiple frames by transforming rays from observation space to canonical space, which is then rendered using a NeRF.…”
Section: Human Reconstruction via Optimization
confidence: 99%
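The "unposing" operation the quoted passage describes — mapping points sampled along camera rays from the observed (posed) body back to a shared canonical pose — is typically done with inverse linear blend skinning driven by SMPL bone transforms. The following is a minimal illustrative sketch of that idea, not the paper's actual code; the bone transforms and skinning weights here are toy values, whereas a real pipeline would obtain them from a fitted SMPL model.

```python
import numpy as np

def inverse_lbs(x_obs, bone_transforms, weights):
    """Map a 3D point from observation (posed) space to canonical space.

    x_obs:           (3,) point sampled along a camera ray in posed space
    bone_transforms: (K, 4, 4) per-bone rigid transforms (canonical -> posed)
    weights:         (K,) skinning weights for this point, summing to 1
    """
    # Blend the per-bone transforms with the skinning weights ...
    blended = np.einsum("k,kij->ij", weights, bone_transforms)
    # ... then invert the blended transform to "unpose" the point.
    x_h = np.append(x_obs, 1.0)                 # homogeneous coordinates
    x_canonical = np.linalg.inv(blended) @ x_h
    return x_canonical[:3]

# Toy example with two bones; bone 0 translates points by (1, 0, 0).
G = np.eye(4)[None].repeat(2, axis=0)
G[0, 0, 3] = 1.0
w = np.array([1.0, 0.0])                        # point fully bound to bone 0
print(inverse_lbs(np.array([1.0, 0.0, 0.0]), G, w))  # → [0. 0. 0.]
```

Once a point is warped into canonical space, a single NeRF queried there can be supervised from all frames of the monocular video, which is what makes the SMPL prior so useful.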
“…Avatar reconstruction of the human body. Prominent techniques for human-body reconstruction necessitate expensive data acquisition, e.g., multi-view RGB video [10,21,28,29,39,40,53,54,63,73,81,86,88,89], monocular RGB video [8,19,20,31,34,35,70,76,83,87], textured scan video [22,64], or image sets [77]. Many of them also use NeRF or its variants as the 3D representation and craft a motion field to bridge the gap between body articulation and the canonical NeRF space.…”
Section: Related Work
confidence: 99%