2017
DOI: 10.1007/978-3-319-57021-1_1

Challenges in Multi-modal Gesture Recognition

Abstract: This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015. We began right at the start of the Kinect™ revolution when inexpensive infrared cameras providing image depth recordings became available. We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision using multimodal dat…

Cited by 53 publications (43 citation statements)
References 156 publications (168 reference statements)
“…However, this line of research has not crossed over to fields such as gesture recognition. The state of the art in gesture recognition heavily relies on data mining and visual characteristics [7], [8], yet the cognitive processes related with gesture production and perception have not been considered as a prominent source of features for gesture recognition.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Although the number of observations may be of the order of 10 (Yamato et al., 1992; Hertz et al., 2006; Wasikowski and Chen, 2010), it is more common for hundreds of observations to be made (Rigoll et al., 1997; Liang and Ouhyoung, 1998; Wei et al., 2011; Jost et al., 2015; Mapari and Kharat, 2015) and sometimes even thousands (Babu, 2016; Sun et al., 2015; Zheng et al., 2015; Zhou et al., 2015). The number depends strongly on the application, which may vary from object or face recognition in images or clips (Serre et al., 2005; Huang et al., 2007; Toshev et al., 2009) to gestures or patterns coming from complex multimodal inputs (Jaimes and Sebe, 2007; Escalera et al., 2016). Some of the major challenges regarding recognition lie in representation, learning, and detection (Lee et al., 2016).…”
Section: N-shot Learning (mentioning)
Confidence: 99%
“…In this work, we cover all the recent advancements in automatic emotion recognition from body gestures. The reader interested in emotion recognition from facial expressions or speech is encouraged to consult dedicated surveys [12], [13], [14]. In this work we refer to these only marginally and only as complements to emotional body gestures.…”
Section: Introduction (mentioning)
Confidence: 99%