2009
DOI: 10.1007/978-3-642-10520-3_108

Real-Time Articulated Hand Detection and Pose Estimation

Abstract: We propose a novel method for planar hand detection from a single uncalibrated image, with the purpose of estimating the articulated pose of a generic model roughly adapted to the current hand shape. The proposed method combines line and point correspondences associated with fingertips, lines, and concavities extracted from color and intensity edges. The method robustly resolves ambiguous association issues and refines the pose estimate through nonlinear optimization. The result can be used in …
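
The abstract's final step, refining a pose by nonlinear optimization over correspondences, can be illustrated with a minimal sketch. The example below is not the authors' algorithm: it fits only a planar similarity transform (rotation, translation, scale) to hypothetical fingertip point correspondences with SciPy's least_squares, standing in for the full articulated-model refinement; all data values are made up.

```python
import numpy as np
from scipy.optimize import least_squares

def transform(params, pts):
    """Apply a planar similarity transform (theta, tx, ty, s) to 2-D points."""
    theta, tx, ty, s = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return s * pts @ R.T + np.array([tx, ty])

def residuals(params, model_pts, image_pts):
    """Reprojection error between transformed model points and detections."""
    return (transform(params, model_pts) - image_pts).ravel()

# Hypothetical fingertip correspondences (model frame vs. image frame).
model_pts = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1], [3.0, 0.3], [4.0, 0.0]])
image_pts = transform([0.3, 5.0, 2.0, 1.5], model_pts)  # synthetic "detections"

# Refine from a rough initial guess, e.g. one supplied by a coarse detector.
fit = least_squares(residuals, x0=[0.0, 0.0, 0.0, 1.0], args=(model_pts, image_pts))
print("refined pose (theta, tx, ty, s):", fit.x)
```

In the paper's setting, the parameter vector would additionally include the articulation parameters of the generic hand model, and line correspondences would contribute further residuals.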

Cited by 9 publications (4 citation statements)
References 13 publications
“…The first 5 elements (f1, f2, f3, f4, f5) indicate the finger shape (thumb, index, middle, ring, and pinky, respectively). In Fig. 11, (f1, f2, f3, f4, f5) = (4, 1, 1, 4, 4) means that the thumb is "close", the index and middle are "open", and the ring and pinky are "close". The next 4 elements (f12, f13, f14, f15) describe the relation between the thumb and the index, middle, ring, and pinky fingers.…”
Section: B. Hand Appearance Feature
Mentioning confidence: 98%
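
As an illustration of the feature layout this excerpt describes, the sketch below packs the five per-finger shape states and the four thumb-relation elements into a small Python structure. The state encoding (1 = "open", 4 = "close") is taken from the excerpt; everything else, including the relation values, is an assumption added for illustration.

```python
from dataclasses import dataclass
from typing import List

# Assumed state encoding: the excerpt only tells us that 1 = "open" and 4 = "close".
FINGER_STATES = {1: "open", 4: "close"}

@dataclass
class HandAppearanceFeature:
    """Toy container mirroring the excerpt's (f1..f5, f12..f15) layout."""
    finger_shape: List[int]      # f1..f5: thumb, index, middle, ring, pinky
    thumb_relations: List[int]   # f12..f15: thumb vs. index/middle/ring/pinky

    def describe(self) -> str:
        names = ["thumb", "index", "middle", "ring", "pinky"]
        parts = [f"{n} is {FINGER_STATES.get(s, 'unknown')}"
                 for n, s in zip(names, self.finger_shape)]
        return ", ".join(parts)

# The configuration from the excerpt's Fig. 11: (4, 1, 1, 4, 4).
feature = HandAppearanceFeature(finger_shape=[4, 1, 1, 4, 4],
                                thumb_relations=[0, 0, 0, 0])  # relation values assumed
print(feature.describe())
```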
“…Firstly, the model-based method [1,5] uses a kinematic hand model to estimate the articulated hand pose (i.e., joint angles, finger positions), leading to a full reconstruction of the articulated hand posture. Secondly, the appearance-based method [4,9,19] uses computer vision techniques to extract important features from images, such as points, edges, contours, or silhouettes, to reconstruct the hand posture and then recognize the finger-spelling.…”
Section: Introduction
Mentioning confidence: 99%
“…However, extracting volumetric primitives of arbitrary objects and finding their shape parameters proved difficult [7]. More recently, generic models for specific object classes such as faces [8], hands [9], human bodies [10], airplanes [11], articulated vehicles [12], specific classes of manufactured products [13,14], etc. have been used for shape classification and pose estimation, the emphasis being on rapid and efficient object segmentation and parameterization rather than the possibility of universal representation.…”
Section: Introduction
Mentioning confidence: 99%
“…70,71], we simplify the problem by making several assumptions on the static hand configuration and utilizing the morphology of the hand to segment it in the depth image [2]. First, we assume the hand is the nearest object to the camera and constrain global hand rotation within (−15°, 15°) for global rotation around the…”
Mentioning confidence: 99%
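
The nearest-object assumption in this excerpt maps naturally onto a simple depth-thresholding step. The sketch below is an assumed illustration rather than the cited authors' code: it keeps pixels within a fixed band of the closest valid depth value, with the band width chosen arbitrarily.

```python
import numpy as np

def segment_nearest_object(depth: np.ndarray, band_mm: float = 150.0) -> np.ndarray:
    """Return a boolean mask of pixels within `band_mm` of the closest valid depth.

    Assumes the hand is the nearest object to the camera, as in the excerpt;
    the 150 mm band width is an arbitrary illustrative choice.
    """
    valid = depth > 0                      # zero typically marks missing depth
    if not np.any(valid):
        return np.zeros_like(depth, dtype=bool)
    nearest = depth[valid].min()
    return valid & (depth <= nearest + band_mm)

# Tiny synthetic depth map (millimetres): a near blob on a far background.
depth = np.full((6, 6), 2000.0)
depth[2:5, 2:5] = 600.0
mask = segment_nearest_object(depth)
print(mask.astype(int))
```

A real pipeline would follow this with the global-rotation range check and the hand-morphology reasoning the excerpt mentions.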