2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00724

Three-Dimensional Reconstruction of Human Interactions

Cited by 76 publications (36 citation statements)
References 34 publications
“…Datasets for 3D human motion and interactions. A large number of datasets focus on 3D human pose and motion from third-person views [16,23,27,30,31,43,52,57,71,74,75,84,90]. For example, Human3.6M [27] and AMASS [51] use optical marker-based motion capture to collect large amounts of high-quality 3D motion sequences; they are limited to constrained studio setups, and images, when available, are polluted by marker data.…”
Section: Related Work
confidence: 99%
“…The Panoptic Studio dataset [30-32, 79] reconstructs interactions between people using a multi-view camera system; it provides annotations for body and hand 3D joints plus facial landmarks. CHI3D [16] focuses on close human-human contacts, using a motion capture system to extract ground-truth 3D skeletons. 3DPW [74] reconstructs both the 3D shape and motion of people "in-the-wild" by fitting SMPL [48] to IMU data and RGB images captured with a hand-held camera, without reconstructing the 3D environment.…”
Section: Related Work
confidence: 99%
“…There is a separate research direction on domain-specific mesh deformation representation, e.g., SMPL/STAR human models (Loper et al. 2015; Osman, Bolkart, and Black 2020), wrinkle-enhanced cloth meshes (Lahner, Cremers, and Tung 2018), and skeletal skinning meshes (Xu et al. 2020). There are even prior works (Fieraru et al. 2020, 2021; Muller et al. 2021) on collision detection and handling for human bodies. These domain-specific methods are typically more accurate than our representation, but by assuming general meshes, our representation can be applied to multiple domains as shown in Sec.…”
Section: Related Work
confidence: 99%
“…Véges et al. [38] make use of a monocular depth prediction network pretrained on various indoor and outdoor datasets to help with absolute person distance estimation. Finally, some recent works also consider the depth relations among people: Jiang et al. [39] optimize the depth ordering by occlusion cues, while Fieraru et al. [40] explicitly localize contact points between people to help with coherent reconstruction. In contrast, we perform our estimation for each person independently.…”
Section: Scale and Distance Estimation
confidence: 99%