2020
DOI: 10.1609/aaai.v34i04.6037

Efficient Facial Feature Learning with Wide Ensemble-Based Convolutional Neural Networks

Abstract: Ensemble methods, traditionally built with independently trained de-correlated models, have proven to be efficient methods for reducing the remaining residual generalization error, which results in robust and accurate methods for real-world applications. In the context of deep learning, however, training an ensemble of deep networks is costly and generates high redundancy, which is inefficient. In this paper, we present experiments on Ensembles with Shared Representations (ESRs) based on convolutional networks …
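The abstract's core idea, an ensemble whose branches sit on top of a common convolutional base so that low-level facial features are computed and stored only once, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration in PyTorch, not the authors' ESR-9 implementation: the layer sizes, branch count, input resolution, and class count are placeholders chosen for readability.

```python
# Minimal sketch of an ensemble with a shared representation (hedged:
# illustrative architecture only, not the paper's exact ESR-9 configuration).
import torch
import torch.nn as nn

class SharedBase(nn.Module):
    """Shared convolutional layers: low-level features computed once for all branches."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x)

class Branch(nn.Module):
    """Independent convolutional branch on top of the shared representation."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, shared):
        h = self.conv(shared).flatten(1)
        return self.classifier(h)

class ESR(nn.Module):
    """Ensemble with shared representations: one shared base, several branches."""
    def __init__(self, num_branches=9, num_classes=8):
        super().__init__()
        self.base = SharedBase()
        self.branches = nn.ModuleList([Branch(num_classes) for _ in range(num_branches)])

    def forward(self, x):
        shared = self.base(x)                      # computed once per image
        logits = [branch(shared) for branch in self.branches]
        return torch.stack(logits).mean(dim=0)     # average branch predictions

model = ESR()
faces = torch.randn(4, 1, 96, 96)   # batch of grayscale face crops (placeholder size)
print(model(faces).shape)           # torch.Size([4, 8])
```

Sharing the base is what removes the redundancy the abstract refers to: only the branch-specific layers are replicated, while the shared layers are trained and stored a single time for the whole ensemble.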

Citation Types: 1 supporting, 65 mentioning, 0 contrasting
Year Published: 2020–2024

Cited by 126 publications (66 citation statements)
References 24 publications

“…We outperform their result by almost 6%, as reported in Table 6. We also outperform the results reported in Miao et al. [43], Li et al. [44], and Siqueira et al. [26], which employ different types of complex neural networks to learn facial expressions. Table 7 reports the experiments of the different versions of the FaceChannel on the FABO dataset.…”
Section: FER+ (supporting)
confidence: 61%
“…We observe that the network provides a stable classification over all the classes, including the ones with fewer examples, such as the "Surprise" class. [36] 84.98%, SHCNN [43] 86.54%, TFE-JL [44] 84.3%, ESR-9 [26] 87.15%, FaceChannel 90.50%; temporal normalization [45] 66.50%, bag of words [45] 59.00%, SVM [46] 32.49%, AdaBoost [46] 35.22%, FaceChannel 80.54%…”
Section: FABO (mentioning)
confidence: 99%
“…to perform facial emotion recognition estimation, achieving good results. Siqueira et al. [33] use ensembles with shared representations (ESRs) based on convolutional networks for facial emotion recognition. It can be seen that our RMSE results are better than those of Reference [33].…”
Section: Framework Performance Experimental Results (mentioning)
confidence: 99%
“…The comparison results of recognition accuracy on the FER2013, CK+, FER+ and RAF data sets are listed in Table IV, Table V, Table VI and Table VII, respectively.…”
[Flattened accuracy-comparison tables from the citing paper follow in the excerpt; the clearly recoverable FER+ results (Table VI, Method / Accuracy) are: DenseNet [62] 86.54%, Ref [63] 85.67%, Ref [64] 87.15%, Ref [65] 85.10%, Ref [66] 82.00%, Ref [67] 84.29%, Ref [68] 87.76%, ResNet50 (transfer learning) [69] 79.90%, MBCC-CNN (ours).]
Section: F. Comparison of Recognition Accuracy with Other Methods (mentioning)
confidence: 99%