2022
DOI: 10.1007/978-3-031-20065-6_26
AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing

Cited by 58 publications (29 citation statements)
References 52 publications
“…Each GRU takes the features from the previous layer, as well as the hidden states (the output of the GRU) of the previous frame h t –1 to estimate the hidden states of the current frame h t . This is in contrast to the approaches by Dittadi et al [DDC*21] and Jiang et al [JSQ*22] which use a window of previous sparse tracking points as the input to the encoder. With the GRUs, we intelligently accumulate the information of all the past observations to reduce the ill‐posedness of the problem and produce temporally coherent motions.…”
Section: Methods
confidence: 91%
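The excerpt above contrasts two ways of handling temporal context: feeding the encoder a window of previous sparse tracking frames versus carrying information forward in a GRU hidden state. The following is a minimal numpy sketch of that recurrent idea, not the cited authors' implementation; the feature and hidden sizes, initialization, and `gru_cell` helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gru_cell(x_t, h_prev, W, U, b):
    """One GRU step: combine the current frame's features x_t with the
    hidden state h_prev carried over from the previous frame.
    W, U, b hold parameters for the update (z), reset (r), and
    candidate (n) gates."""
    Wz, Wr, Wn = W
    Uz, Ur, Un = U
    bz, br, bn = b
    z = 1 / (1 + np.exp(-(x_t @ Wz + h_prev @ Uz + bz)))  # update gate
    r = 1 / (1 + np.exp(-(x_t @ Wr + h_prev @ Ur + br)))  # reset gate
    n = np.tanh(x_t @ Wn + (r * h_prev) @ Un + bn)        # candidate state
    return (1 - z) * h_prev + z * n                       # new hidden state h_t

# Hypothetical sizes: 18-D sparse head/hand features, 64-D hidden state.
F, H = 18, 64
W = [rng.standard_normal((F, H)) * 0.1 for _ in range(3)]
U = [rng.standard_normal((H, H)) * 0.1 for _ in range(3)]
b = [np.zeros(H) for _ in range(3)]

# Stream frames one at a time: the hidden state accumulates all past
# observations, so no explicit window of previous frames is needed.
h = np.zeros(H)
for t in range(100):
    x_t = rng.standard_normal(F)  # sparse tracking features at frame t
    h = gru_cell(x_t, h, W, U, b)
```

Because each `h` is a convex combination of the previous state and a bounded candidate, the recurrence stays stable while summarizing the full observation history in a fixed-size vector.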
“…4 our results with different random vectors are drastically different for the less constrained lower body, but are similar for the upper body. This is one of the major advantages of our system compared to the approaches using deterministic training [YKL21; JSQ*22]. In their case, the network produces an average pose for the highly ambiguous poses, which is often implausible.…”
Section: Methods
confidence: 99%
“…The feature extractors of FHD are pretrained on the B2H and TED Hands datasets, respectively. • MPJRE: We leverage the Mean Per Joint Rotation Error [°] (MPJRE) [14] to measure the absolute distance between predicted 3D representation joints and pseudo ground truth. • Diversity: To verify the diversity of sampled hand gestures, we calculate the average feature distance between 500 random combined sequential pairs [20,24].…”
Section: Objective Functionsmentioning
confidence: 99%
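MPJRE, as used in the excerpt above, averages the angular distance between predicted and ground-truth per-joint rotations, reported in degrees. A minimal sketch of one common formulation (the geodesic angle of the relative rotation matrix) follows; the exact convention in the cited work may differ, and the joint count and helper names here are illustrative assumptions.

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: rotation matrix from a unit axis and an angle."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def mpjre_degrees(R_pred, R_gt):
    """Mean Per Joint Rotation Error in degrees.
    R_pred, R_gt: (J, 3, 3) arrays of per-joint rotation matrices.
    The per-joint error is the geodesic angle of R_pred^T @ R_gt."""
    errors = []
    for Rp, Rg in zip(R_pred, R_gt):
        R_rel = Rp.T @ Rg
        cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
        errors.append(np.degrees(np.arccos(cos_a)))
    return float(np.mean(errors))

# Toy check: ground truth rotated 10 degrees about z relative to the
# prediction at every joint should give an MPJRE of exactly 10 degrees.
J = 22  # illustrative joint count
R_pred = np.stack([np.eye(3)] * J)
R_off = axis_angle_to_matrix(np.array([0.0, 0.0, 1.0]), np.radians(10.0))
R_gt = np.stack([R_off] * J)
print(round(mpjre_degrees(R_pred, R_gt), 3))  # → 10.0
```

The clip before `arccos` guards against floating-point traces marginally outside [-1, 1], which would otherwise produce NaN errors.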
“…However, these solutions tend to suffer from occlusion and are often obtrusive to wear. Alternative approaches directly estimate the full-body pose based solely on the available temporal motion information of the user's head and hands [2,28,61]. Moreover, various specialized mobile hand-held or body-worn systems have been built for tracking that make use of sensing modalities such as magnetic [11], mechanical [41], or acoustic [29,57] sensing.…”
Section: Body Capture Using Worn Sensors
confidence: 99%