2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00565

SimulCap: Single-View Human Performance Capture With Cloth Simulation

Abstract: This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. We first digitize the performer using a multi-layer surface representation, which includes the undressed body surface and separate clothing meshes. For performance capture, we perform skeleton tracking, cloth simulation, and…
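The abstract outlines a per-frame loop: pose the undressed body layer via skeleton tracking, advance each clothing mesh with physics-based cloth simulation, then fit the result to the observed depth. The following is a minimal Python sketch of that loop under stated assumptions; every function name below is an illustrative stub, not the authors' code.

```python
# Hypothetical sketch of SimulCap's per-frame capture loop, per the
# abstract: (1) skeleton tracking of the undressed body layer,
# (2) physics-based simulation of each separate garment mesh,
# (3) refinement against the observed depth frame.
# All names are illustrative stubs, not the authors' API.

def track_skeleton(body_mesh, depth_frame):
    """Fit the skeletal pose of the body layer to the depth frame
    (stub: a real system would solve an articulated registration)."""
    return body_mesh

def simulate_cloth(garment_mesh, body_mesh, dt=1.0 / 30.0):
    """Advance the garment one physics step, colliding against the
    body (stub: e.g. a mass-spring or FEM step with collisions)."""
    return garment_mesh

def fit_to_depth(garment_mesh, depth_frame):
    """Non-rigidly refine the simulated cloth toward the observed
    depth so captured wrinkles match the measurement (stub)."""
    return garment_mesh

def capture_frame(body_mesh, garment_meshes, depth_frame):
    """One iteration of the multi-layer capture pipeline."""
    body_mesh = track_skeleton(body_mesh, depth_frame)
    garment_meshes = [simulate_cloth(g, body_mesh) for g in garment_meshes]
    garment_meshes = [fit_to_depth(g, depth_frame) for g in garment_meshes]
    return body_mesh, garment_meshes
```

Keeping body and garments as separate layers is what lets step (2) run a physical simulator per garment rather than deforming a single fused surface.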

Cited by 101 publications (53 citation statements)
References 81 publications
“…3D human body/cloth reconstruction. 3D body shapes and clothing are modeled from RGB/RGBD cameras in [49,43,42,15,2,1,47,48], while surface/wrinkle reconstruction from images is addressed in [9,3,35]. Moreover, generative models reconstruct clothing in [25,16].…”
Section: Point Cloud and Mesh Processing
Mentioning; confidence: 99%
“…With the advent of RGB-D cameras, a number of methods have been proposed to reconstruct deformable objects from monocular depth video [7]–[16]. KinectFusion [41] showed impressive results for static scenes by tracking the camera’s motion with simple model-to-frame registration.…”
Section: Related Work
Mentioning; confidence: 99%
“…KinectFusion [41] showed impressive results for static scenes by tracking the camera’s motion with simple model-to-frame registration. This seminal work was extended to deformable and dynamic objects [7] and became the basis for subsequent works [5], [6], [8]–[16], [42]. For robust model-to-frame tracking, the methods rely on careful and slow motions [7], multiple RGB-D cameras [5], color feature tracking [8], partial resetting of the model [6], or an additional high-frame-rate RGB-D camera [42].…”
Section: Related Work
Mentioning; confidence: 99%