2017
DOI: 10.1007/s11042-017-5105-z
MRMR-based ensemble pruning for facial expression recognition

Cited by 27 publications (20 citation statements)
References 42 publications
“…With only a few exceptions [1,32,33], most of the recent works on facial expression recognition are based on deep learning [2,9,10,13,14,17,21,22,24,23,26,28,38,39,40]. Some of these recent works [14,17,21,38,39] proposed to train an ensemble of convolutional neural networks for improved performance, while others [6,16] combined deep features with handcrafted features such as SIFT [25] or Histograms of Oriented Gradients (HOG) [8]. While most works studied facial expression recognition from static images, some works tackled facial expression recognition in video [13,16].…”
Section: Related Work
confidence: 99%
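The cited paper's title refers to MRMR-based (minimum Redundancy Maximum Relevance) ensemble pruning, i.e. keeping only ensemble members that are individually accurate yet mutually diverse. A minimal sketch of that idea, assuming a greedy relevance-minus-redundancy score over a matrix of member predictions (the function `mrmr_prune` and its scoring are illustrative assumptions, not the authors' exact method):

```python
import numpy as np

def mrmr_prune(preds, labels, k):
    """Greedy MRMR-style ensemble pruning (illustrative sketch).

    preds  : (n_members, n_samples) array of class predictions
    labels : (n_samples,) ground-truth labels
    k      : number of ensemble members to keep
    """
    n = preds.shape[0]
    relevance = (preds == labels).mean(axis=1)  # per-member accuracy
    selected = [int(np.argmax(relevance))]      # seed with the most accurate member
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # redundancy: mean prediction agreement with already-selected members
            red = np.mean([(preds[i] == preds[j]).mean() for j in selected])
            score = relevance[i] - red          # favor accurate, non-redundant members
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```

The pruned subset would then be combined by majority vote; the relevance and redundancy terms can equally be defined via mutual information, as the MRMR name suggests.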
“…Table 1 includes the results of our combined models, one based on global SVM and another based on local SVM, on the FER 2013 [11], the FER+ [2] and the AffectNet [27] data sets. Our combinations based on pre-trained, fine-tuned and handcrafted models, with and without data augmentation (aug.), are compared with several state-of-the-art approaches [2,6,14,15,17,21,23,27,34,39,40], which are listed in temporal order. The best result on each data set is highlighted in bold.…”
Section: Implementation Details
confidence: 99%
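The statement above combines a global SVM and a local SVM into a single prediction. A minimal late-fusion sketch, assuming the two models expose per-class decision scores and are blended with a fixed weight (the function `combine_scores` and the equal-weight default are assumptions for illustration, not the cited work's exact scheme):

```python
import numpy as np

def combine_scores(global_scores, local_scores, weight=0.5):
    """Late fusion of two classifiers' per-class scores (illustrative sketch).

    global_scores, local_scores : (n_samples, n_classes) decision scores
    weight : contribution of the global model (equal weighting by default)
    """
    fused = weight * global_scores + (1.0 - weight) * local_scores
    return fused.argmax(axis=1)  # predicted class index per sample
```

In practice the scores would first be calibrated to a common scale (e.g. via softmax) so that neither model dominates the fused decision.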