2009
DOI: 10.1007/s11263-009-0273-6
HumanEva: Synchronized Video and Motion Capture Dataset and Baseline Algorithm for Evaluation of Articulated Human Motion

Abstract: While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchroni…


Cited by 1,151 publications (984 citation statements); references 78 publications.
“…13 salient points on human body: head center, right shoulder, right elbow, right hand, left shoulder, left elbow, left hand, right hip, right knee, right foot (ankle), left hip, left knee, left foot (ankle) were manually marked for all videos in the corpus. We build upon the pose error metric proposed in [21] and define the following pose evaluation metrics for each vignette in the corpus: (a) Average error per frame as in (5), (b) Average error per marker per frame (D aepmpf ) (average of (5) for number of markers) , (c) Average error for different markers per frame as in (6).…”
Section: Methods (citation type: mentioning; confidence: 99%)
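The metrics in the excerpt above (average error per frame, and per marker per frame, over a fixed set of manually marked body points) can be sketched as follows. This is a minimal illustration, not the cited authors' code; the function names and the `(frames, markers, 3)` array layout are assumptions for the sketch.

```python
import numpy as np

def average_error_per_frame(gt, pred):
    """Metric (a): mean Euclidean distance between corresponding
    markers in a single frame.

    gt, pred: (M, 3) arrays of 3D marker positions, e.g. the 13
    manually marked points (head center, shoulders, elbows, ...).
    """
    return float(np.mean(np.linalg.norm(gt - pred, axis=1)))

def average_error_per_marker(gt_seq, pred_seq):
    """Metric (c): error for each marker separately, averaged over
    all frames of a sequence.

    gt_seq, pred_seq: (T, M, 3) arrays (T frames, M markers).
    Returns an (M,) array with one mean distance per marker.
    """
    return np.linalg.norm(gt_seq - pred_seq, axis=2).mean(axis=0)
```

Metric (b), the average error per marker per frame, is then the mean of `average_error_per_frame` over all frames of a vignette, divided appropriately by the number of markers when reported per marker.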
“…This is already pretty accurate but the black curves show an even better and smoother tracking with a deviation of up to just 2mm. In the second experiment we took a sequence of the HumanEVA-II benchmark [12]. Here a surface model, calibrated image sequences, and background images are provided.…”
Section: Methods (citation type: mentioning; confidence: 99%)
“…It depicts the unconstrained results in red and the constrained results in black. Table 1 compares the errors (automatically evaluated [12]). Overall the tracking has been improved remarkably using the additional ground plane constraint.…”
Section: Methods (citation type: mentioning; confidence: 99%)