Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2012
DOI: 10.1145/2207676.2208303

Instructing people for training gestural interactive systems

Cited by 325 publications (313 citation statements)
References 12 publications
“…We have chosen a random forest classifier as our base classifier, as they are fast and efficient for training, and more importantly, for testing, making them well-suited for real-time applications. Random forests have been shown to work well for action and gesture recognition in the past, either through a voting framework [20] or by direct classification [6].…”
Section: Base Classifier
confidence: 99%
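
No code accompanies these citation statements, but a minimal sketch of the per-frame random forest set-up described above, using the scikit-learn API with placeholder feature and label arrays (all names and sizes here are illustrative, not the cited authors' code), could look like this:

# Illustrative sketch only: a per-frame random forest gesture classifier,
# assuming frame-level skeleton features have already been extracted.
# X: (n_frames, n_features) feature array, y: (n_frames,) gesture labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 60))        # placeholder frame-level features
y = rng.integers(0, 12, size=2000)     # placeholder labels for 12 gesture classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Forests are cheap to train and very cheap to evaluate, which is the
# real-time argument made in the citing work.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))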
“…A number of gesture recognition methods have been proposed in the literature [15,6,13], including some which operate on the human pose estimated from the Kinect [15,6]. Most of these methods follow a standard classification paradigm.…”
Section: Introduction
confidence: 99%
“…Microsoft Research Cambridge created the MSRC-12 Gesture dataset [26] in 2012, which includes relevant gestures and their corresponding semantic labels for evaluating gesture recognition and detection systems. The dataset consists of 594 sequences of human skeletal body-part gestures, totalling 719,359 frames (over 6 hours and 40 minutes) recorded at a sample rate of 30 Hz.…”
Section: Microsoft Research Cambridge-12 Kinect Gesture Dataset
confidence: 99%
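
The figures quoted above are internally consistent; a quick check in plain Python, with the numbers taken directly from the statement:

# 719,359 frames at 30 Hz should come to roughly 6 h 40 min.
n_frames = 719_359
sample_rate_hz = 30

total_seconds = n_frames / sample_rate_hz
hours, remainder = divmod(total_seconds, 3600)
print(f"{int(hours)} h {remainder / 60:.0f} min")   # -> 6 h 40 min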
“…Inference is performed using a Bayes filter to iteratively update a probability distribution over the model classes. Fothergill et al. (2012) employ joint angles, joint-angle velocities and xyz-velocities of joints as the feature vector at each frame. Gesture recognition is then carried out using random forests.…”
Section: Human Action Recognition with RGB-D Devices
confidence: 99%
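
As a rough sketch of the per-frame feature vector that statement attributes to Fothergill et al. (joint angles, joint-angle velocities and xyz joint velocities), finite differences over a skeleton sequence could be computed as below. The (n_frames, n_joints, 3) array layout and the angle triples are assumptions made for illustration, not the published implementation:

import numpy as np

def joint_angles(frames, triples):
    # Angle at the middle joint of each (parent, joint, child) triple, per frame.
    a = frames[:, [t[0] for t in triples]] - frames[:, [t[1] for t in triples]]
    b = frames[:, [t[2] for t in triples]] - frames[:, [t[1] for t in triples]]
    cos = np.sum(a * b, axis=-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8
    )
    return np.arccos(np.clip(cos, -1.0, 1.0))          # (n_frames, n_triples)

def frame_features(frames, triples, dt=1.0 / 30.0):
    ang = joint_angles(frames, triples)
    ang_vel = np.gradient(ang, dt, axis=0)              # joint-angle velocities
    xyz_vel = np.gradient(frames, dt, axis=0)           # xyz velocity of every joint
    return np.concatenate(
        [ang, ang_vel, xyz_vel.reshape(len(frames), -1)], axis=1
    )                                                    # one feature vector per frame

# Hypothetical 20-joint skeleton, 3 s of data at 30 Hz, a few angle triples.
rng = np.random.default_rng(0)
seq = rng.normal(size=(90, 20, 3))
triples = [(0, 1, 2), (1, 2, 3), (4, 5, 6)]
print(frame_features(seq, triples).shape)               # (90, 2*len(triples) + 20*3)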