Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (ICMI 2015)
DOI: 10.1145/2818346.2830590
Hierarchical Committee of Deep CNNs with Exponentially-Weighted Decision Fusion for Static Facial Expression Recognition

Cited by 99 publications (52 citation statements, published 2016–2022)
References 28 publications
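The fusion scheme named in the title is not detailed in this report, but for orientation, the following is a minimal sketch of what exponentially-weighted decision fusion over a committee of CNN classifiers might look like. The function name, the weighting-by-validation-accuracy rule, and the temperature parameter are all assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of exponentially-weighted decision fusion for a
# committee of classifiers. The weighting rule (exponential in each
# member's validation accuracy) is an assumption, not the paper's
# exact method.
import numpy as np

def exp_weighted_fusion(member_probs, member_val_acc, temperature=1.0):
    """Fuse per-member class-probability vectors into one decision.

    member_probs   : (n_members, n_classes) array of softmax outputs.
    member_val_acc : (n_members,) array of validation accuracies used
                     to derive each member's weight (assumed criterion).
    temperature    : scaling factor for the exponential weighting.
    """
    weights = np.exp(np.asarray(member_val_acc) / temperature)
    weights /= weights.sum()  # normalize weights to sum to 1
    fused = (weights[:, None] * np.asarray(member_probs)).sum(axis=0)
    return int(np.argmax(fused)), fused  # predicted class, fused probabilities

# Example: three committee members voting over 7 expression classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(7), size=3)  # dummy softmax outputs
label, fused = exp_weighted_fusion(probs, member_val_acc=[0.55, 0.60, 0.58])
```

Relative to a plain average, this weighting lets stronger committee members dominate the fused decision, with the temperature controlling how sharply the weights concentrate on the best members.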
“…However, more recently deep and recurrent neural networks have shown promising capabilities, outperforming previous results in dealing with rich multimodal data in areas like facial expression recognition (Kim et al 2015) or speech recognition, both in lab settings and even in the wild (Dhall et al 2015). …”
Section: Related Work
confidence: 99%
“…In terms of data processing, the initial forays into MMLA used relatively simple machine learning algorithms to build models of the phenomena under study (Ochoa et al, 2013). However, more recently, deep and recurrent neural networks (RNNs) have shown promising capabilities, outperforming previous results in dealing with rich multimodal data in areas such as facial expression recognition (Kim, Lee, Roh, & Lee, 2015) or speech recognition, both in lab settings and even in the wild (Dhall, Ramana Murthy, Goecke, Joshi, & Gedeon, 2015).…”
Section: Multimodal Analytics and Professional Activity Detection
confidence: 99%
“…These trends are exemplified in the annual competitions Emotion Recognition in the Wild (EmotiW) [12] and Audio Video Emotion Challenge (AVEC) [13]. Since 2010, deep learning methods have been applied to affect recognition problems across multiple modalities and led to improvements in accuracy, including winning performances at EmotiW [14], [15], [16] and AVEC [17], [18], [19].…”
Section: Introduction
confidence: 99%
“…Furthermore, the teaching practice categorization used in this study is only one example, and other characterizations are also possible, especially for researchers or practitioners interested in concrete pedagogical approaches such as collaborative learning, or inquiry-based learning (which will also prompt new exploration efforts into different sets of useful multimodal features). Also, further exploration is needed in applying more complex algorithms (e.g., deep/recurrent neural networks), which have recently shown promising capabilities in dealing with rich multimodal data (e.g., [24]). …”
Section: Teacher Activity
confidence: 99%