2011 International Conference on Computer Vision 2011
DOI: 10.1109/iccv.2011.6126356
A data-driven approach for real-time full body pose reconstruction from a depth camera

Cited by 236 publications (186 citation statements)
References 23 publications
“…Thus, we exploit the approach proposed in [13] to describe the topology of point clouds. Then, a new strategy proposed in [2] for searching the end-effectors from its topology representation is integrated into the proposed system, providing better performance than the strategy used in [13].…”
Section: Single Frame End-effector Estimation
Confidence: 99%
“…We exploit the strategy used in [2] to detect end-effectors on the graph. According to this strategy, we first search for the node with the longest shortest path to the target node, which is initially set to be the graph root.…”
Section: Edges and Weights
Confidence: 99%
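The search step described in the excerpt above can be sketched as a breadth-first search that returns the graph node with the longest shortest path from a given root. This is a minimal illustrative sketch, assuming an unweighted adjacency-list graph; the function name and graph representation are hypothetical and not taken from the cited works, which build their graphs from depth-camera point clouds:

```python
from collections import deque

def farthest_node(adj, root):
    """Return the node with the longest shortest path from `root`.

    `adj` maps each node to a list of its neighbours in an
    unweighted graph (hypothetical representation; the cited
    works derive such a graph from a point cloud's topology).
    """
    dist = {root: 0}
    queue = deque([root])
    while queue:                      # standard BFS
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:         # first visit = shortest path
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist, key=dist.get)    # node farthest from root

# On a stick-figure-like graph rooted at the torso (node 0),
# the farthest node corresponds to a limb tip (end-effector):
skeleton = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
tip = farthest_node(skeleton, 0)
```

In the cited strategy, the node found this way becomes the next target, and the search repeats so that each end-effector in turn is the extremity farthest from the ones already found.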
“…Body pose is then inferred from the body parts' centroids. Baak et al [20] combine local feature matching with a database lookup, achieving fast and robust end-effector tracking. Zhu et al [21] propose a tracking algorithm which exploits temporal consistency to estimate the pose of a constrained human model.…”
Section: Related Work
Confidence: 99%