2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00532
Optimizing Filter Size in Convolutional Neural Networks for Facial Action Unit Recognition

Abstract: Recognizing facial action units (AUs) during spontaneous facial displays is a challenging problem. Most recently, Convolutional Neural Networks (CNNs) have shown promise for facial AU recognition, where predefined and fixed convolution filter sizes are employed. To achieve the best performance, the optimal filter size is often found empirically through extensive experimental validation. Such a training process incurs expensive training costs, especially as the network becomes deeper. This pa…
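The abstract calls out the cost of finding filter sizes by exhaustive empirical validation. A minimal sketch of that brute-force search is below; the `evaluate` function and its "optimum" are purely illustrative stand-ins for training a CNN with the given per-layer filter sizes and returning a validation score, not the paper's method (which is truncated above).

```python
import itertools

def evaluate(filter_sizes):
    # Stand-in scoring function: a real search would train a full network
    # for each combination and return validation accuracy. Here we pretend
    # the (unknown) optimum is a 5x5 filter followed by a 3x3 filter.
    target = (5, 3)
    return -sum((a - b) ** 2 for a, b in zip(filter_sizes, target))

def grid_search(candidates=(3, 5, 7), depth=2):
    # Cost grows as len(candidates) ** depth: one full training run per
    # combination, which is what makes this search expensive for deep nets.
    combos = itertools.product(candidates, repeat=depth)
    return max(combos, key=evaluate)
```

With three candidate sizes and two layers this is 9 training runs; at 10 layers it is already 59,049, which is the combinatorial blow-up the abstract refers to.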

Cited by 90 publications (75 citation statements) | References 38 publications
“…Very recently, Han et al [52] propose to use boosting to select discriminative neurons for facial action unit classification. They employ decision stumps on top of single neurons as weak learners, and learn weighting factors for each of these neurons by offline AdaBoost [42] applied to each mini-batch separately.…”
Section: Boosting for CNNs
confidence: 99%
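The excerpt above describes the cited technique concretely: decision stumps over single neuron activations serve as weak learners, and AdaBoost assigns them weighting factors. A hedged toy sketch of that scheme follows; all function names, the greedy per-round neuron selection, and the toy data are illustrative assumptions, not the authors' mini-batch implementation.

```python
import math

def stump_predict(v, threshold, polarity):
    # A decision stump on a single neuron's activation (labels in {-1, +1}).
    return polarity if v > threshold else -polarity

def fit_stump(values, labels, weights):
    # Pick the threshold/polarity minimizing weighted classification error.
    best_err, best_t, best_p = float("inf"), 0.0, 1
    for t in sorted(set(values)):
        for p in (1, -1):
            err = sum(w for v, y, w in zip(values, labels, weights)
                      if stump_predict(v, t, p) != y)
            if err < best_err:
                best_err, best_t, best_p = err, t, p
    return best_err, best_t, best_p

def adaboost_neurons(activations, labels, rounds=3):
    # activations: one list of per-neuron values per sample.
    # Each round selects the single most discriminative neuron and weights
    # its stump by the standard AdaBoost factor alpha = 0.5*ln((1-err)/err).
    n = len(labels)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(len(activations[0])):
            err, t, p = fit_stump([a[j] for a in activations], labels, weights)
            if best is None or err < best[0]:
                best = (err, j, t, p)
        err, j, t, p = best
        err = min(max(err, 1e-9), 1 - 1e-9)  # guard the log
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, j, t, p))
        # Re-weight samples: misclassified ones gain weight.
        for i in range(n):
            pred = stump_predict(activations[i][j], t, p)
            weights[i] *= math.exp(-alpha * labels[i] * pred)
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def ensemble_predict(ensemble, sample):
    score = sum(alpha * stump_predict(sample[j], t, p)
                for alpha, j, t, p in ensemble)
    return 1 if score >= 0 else -1
```

In the cited work this re-weighting is applied per mini-batch rather than over a full dataset as here; the per-neuron stumps and AdaBoost weighting are the common core.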
“…While not yet particularly popular, research efforts have already been devoted to investigating the advantages of such irregular CNNs [53]. Our HCNN would make such future research endeavors easier.…”
Section: Discussion
confidence: 99%
“…Although these results are not directly comparable because of different experimental settings, they indicate that our method trained with labels at sequence-level and a small portion of labelled frames can still show competitive performance. It is known that Supervised Deep Learning models require a large number of samples to be effectively trained [49]. Thus, this still limits their application to Facial Behavior Analysis, where the annotation process is laborious and labelled data is scarce.…”
Section: Conclusion and Discussion
confidence: 99%