2021
DOI: 10.48550/arxiv.2106.03772
Preprint

Learning Dynamics via Graph Neural Networks for Human Pose Estimation and Tracking

Abstract: Multi-person pose estimation and tracking serve as crucial steps for video understanding. Most state-of-the-art approaches rely on first estimating poses in each frame and only then implementing data association and refinement. Despite the promising results achieved, such a strategy is inevitably prone to missed detections, especially in heavily cluttered scenes, since this tracking-by-detection paradigm is, by nature, largely dependent on visual evidence that is absent in the case of occlusion. In this paper,…
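To make the idea concrete: the title suggests modeling pose dynamics with a graph neural network, where joints are graph nodes connected along the skeleton and message passing predicts where each joint moves next, so a tracker can keep a hypothesis alive even when visual evidence is occluded. Below is a minimal, hypothetical PyTorch sketch of that general idea; the class name, layer sizes, and chain adjacency are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class PoseDynamicsGNN(nn.Module):
    """Hypothetical one-layer graph network: predicts next-frame joint
    positions from the current pose by passing messages along the skeleton."""
    def __init__(self, hid=64):
        super().__init__()
        self.encode = nn.Linear(2, hid)      # (x, y) per joint -> node feature
        self.message = nn.Linear(hid, hid)   # feature sent along each bone
        self.decode = nn.Linear(2 * hid, 2)  # node + aggregated message -> (dx, dy)

    def forward(self, joints, adj):
        # joints: (B, J, 2) pixel coordinates; adj: (J, J) 0/1 skeleton adjacency
        h = torch.relu(self.encode(joints))
        deg = adj.sum(-1, keepdim=True).clamp(min=1)           # neighbor counts
        msg = torch.einsum('ij,bjh->bih', adj / deg, self.message(h))
        delta = self.decode(torch.cat([h, msg], dim=-1))
        return joints + delta                                  # predicted next pose

# Toy usage: 15 joints connected in a chain.
J = 15
adj = torch.zeros(J, J)
for i in range(J - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
pred = PoseDynamicsGNN()(torch.rand(1, J, 2), adj)             # shape (1, 15, 2)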

Cited by 1 publication in 2022 (3 citation statements)
References 39 publications
“…CPM [15,16] uses a serialized convolution architecture to express spatial information and texture information. Its network structure is divided into multiple stages.…”
Section: Related Work
confidence: 99%
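For context on the multi-stage design mentioned in the quote: each stage re-estimates joint belief maps from the shared image features concatenated with the previous stage's output, so later stages can correct earlier mistakes using spatial context. A minimal illustrative PyTorch sketch follows; the layer widths, kernel sizes, and class name are assumptions, not CPM's exact configuration.

import torch
import torch.nn as nn

class MultiStagePose(nn.Module):
    """Schematic multi-stage pose network: stage 1 predicts heatmaps from
    image features; each later stage refines them given those features."""
    def __init__(self, num_joints=14, stages=3, feat=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.stage1 = nn.Conv2d(feat, num_joints, 1)
        # later stages see image features plus the previous belief maps
        self.refine = nn.ModuleList(
            nn.Conv2d(feat + num_joints, num_joints, 7, padding=3)
            for _ in range(stages - 1)
        )

    def forward(self, img):
        f = self.backbone(img)
        maps = [self.stage1(f)]
        for stage in self.refine:
            maps.append(stage(torch.cat([f, maps[-1]], dim=1)))
        return maps  # one heatmap set per stage, each typically supervised

out = MultiStagePose()(torch.rand(1, 3, 64, 64))  # list of 3 (1, 14, 64, 64) maps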
“…The interaural distances at different scales show obvious differences but are normally distributed at the same scale, which is relatively stable and meets the requirements of the scale factor. Finally, there is an issue regarding the non-visible ears in the images; in this case, the median of the inter-ear distance at the same scale is used as a scale factor, i.e., 15.29 at 60 cm, 12.85 at 70 cm, and 10.81 at 80 cm. Figure 6 reveals that the scale factor is concentrated in [10, 15] pixels. The minimum unit is 1 pixel, and the corresponding distinguishable threshold is 0.1.…”
Section: Evaluation Index
confidence: 99%
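To illustrate the quoted procedure: the scale factor is the median inter-ear pixel distance observed at a given camera distance, with the fixed per-distance medians used as a fallback when the ears are not visible. A small NumPy sketch (the function name and input layout are hypothetical; the fallback values and the 0.1 threshold come from the statement above):

import numpy as np

# Fallback medians (pixels) per camera distance (cm), from the quoted statement.
FALLBACK_MEDIAN = {60: 15.29, 70: 12.85, 80: 10.81}

def scale_factor(ear_pairs, distance_cm):
    # ear_pairs: (N, 2, 2) array of left/right ear (x, y) per image, or None
    # when the ears are not visible; distance_cm: 60, 70, or 80.
    if ear_pairs is None or len(ear_pairs) == 0:
        return FALLBACK_MEDIAN[distance_cm]        # non-visible ears: use median
    d = np.linalg.norm(ear_pairs[:, 0] - ear_pairs[:, 1], axis=-1)
    return float(np.median(d))

print(scale_factor(np.array([[[10.0, 5.0], [22.0, 5.0]]]), 60))  # -> 12.0

With scale factors concentrated in [10, 15] pixels and a minimum unit of 1 pixel, the distinguishable threshold in scale-normalized units is roughly 1/10 = 0.1, matching the quoted value.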