2022
DOI: 10.1007/978-3-031-20068-7_11
EgoBody: Human Body Shape and Motion of Interacting People from Head-Mounted Devices

Cited by 46 publications (35 citation statements)
References 92 publications
“…We use only 3DPW to perform ablations on the reconstructed local pose using our method. The most relevant dataset that is captured with dynamic cameras and provides ground truth 3D pose in the global frame is the recently introduced EgoBody dataset [71]. EgoBody is captured with a head-mounted camera on an interactor, who sees and interacts with a second interactee.…”
Section: Results (mentioning, confidence: 99%)
“…World PA First-MPJPE (W-MPJPE) reports the MPJPE after aligning the first frames of the prediction and the ground truth. Table 3: Comparison with the state of the art on the EgoBody dataset [71]. We compare our approach with a variety of state-of-the-art methods for human mesh recovery.…”
Section: Results (mentioning, confidence: 99%)
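For readers unfamiliar with the metric named in the statement above, the following is a minimal sketch of MPJPE and a first-frame-aligned variant in the spirit of W-MPJPE. The function names, the Procrustes alignment on frame 0, and the array shapes are illustrative assumptions, not the cited works' exact evaluation code.

```python
import numpy as np


def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance between
    predicted and ground-truth joints, both of shape [T, J, 3] (in meters)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()


def w_mpjpe_first_frame(pred, gt):
    """Sketch of a W-MPJPE-style metric: rigidly align the prediction to the
    ground truth using only the first frame (Kabsch/Procrustes on frame 0),
    then compute MPJPE over the whole world-frame sequence.
    Details of the alignment are an assumption for illustration."""
    # Center the first-frame joints of both sequences.
    p0 = pred[0] - pred[0].mean(axis=0)
    g0 = gt[0] - gt[0].mean(axis=0)
    # Rotation aligning the first predicted frame to the first ground-truth frame.
    U, _, Vt = np.linalg.svd(p0.T @ g0)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation so the rotated first-frame centroid matches the ground truth.
    t = gt[0].mean(axis=0) - R @ pred[0].mean(axis=0)
    # Apply the first-frame rigid transform to every frame, then score.
    pred_aligned = pred @ R.T + t
    return mpjpe(pred_aligned, gt)
```

Because only the first frame is used for alignment, any drift of the predicted global trajectory over time is still penalized, which is what distinguishes this world-frame metric from per-frame Procrustes-aligned errors.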
“…Several current methods focus on positioning humans in a pre-scanned 3D scene [12,15,18] and on simultaneous estimation of human poses and objects humans interact with [7,59,62]. A different setup assumes an RGB-D sensor [68] or a moving camera [16,25,29,67] that facilitates estimating the scene geometry. Recent methods integrate physics-based constraints into monocular 3D human motion capture and mitigate foot-floor penetration and other severe artefacts [47,48].…”
Section: Scene-aware Motion Capture (mentioning, confidence: 99%)