2021
DOI: 10.1109/lra.2021.3098944

Cross-Modal Representation Learning for Lightweight and Accurate Facial Action Unit Detection

Cited by 7 publications (4 citation statements)
References 42 publications
“…Since then, AUs have become a central component of many applications, including human activity recognition and behavior understanding [15], facial expression recognition (FER) [16], video games [17], car driver attention monitoring systems [18] and remote health monitoring [19]. AUs are combinations of facial muscle movements and are the basic components of facial expressions [20]. The development of AU detection systems has been a longstanding challenge in artificial intelligence, with early approaches relying on classical methods such as Gabor filters, principal component analysis (PCA) [21], Support Vector Machines (SVM) [3], and k-Nearest Neighbor classifiers (KNN) [22].…”
Section: A. Background and Related Work
Mentioning confidence: 99%
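The classical pipeline this excerpt describes — Gabor filters for texture features, PCA for dimensionality reduction, and an SVM as the per-AU classifier — can be sketched as follows. This is a minimal illustration on synthetic patches, not code from any of the cited works; the kernel size, orientations, pooled statistics, and toy data are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def gabor_kernel(size=9, theta=0.0, freq=0.25, sigma=2.0):
    """Real part of a Gabor filter: a Gaussian-windowed sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter at several orientations; pool each response map into 3 stats."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        s = k.shape[0]
        h, w = img.shape
        resp = np.zeros((h - s + 1, w - s + 1))
        # valid-mode correlation via explicit sliding windows (no SciPy needed)
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(img[i:i + s, j:j + s] * k)
        feats.extend([resp.mean(), resp.std(), np.abs(resp).max()])
    return np.array(feats)

rng = np.random.default_rng(0)
# Synthetic stand-in for cropped face patches: the "AU active" class
# carries an extra horizontal striping pattern on top of the noise.
X_imgs = rng.normal(size=(40, 24, 24))
y = np.repeat([0, 1], 20)
X_imgs[y == 1] += np.sin(np.arange(24) * 1.5)[:, None]

X = np.stack([gabor_features(im) for im in X_imgs])  # 12-D per image
X_red = PCA(n_components=4).fit_transform(X)         # PCA compression
clf = SVC(kernel="linear").fit(X_red, y)             # binary SVM per AU
print(X.shape, X_red.shape, clf.score(X_red, y))
```

In a real system one binary SVM would be trained per action unit, with the Gabor bank applied to registered face crops rather than random noise.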
“…Moreover, we recall that the main objective of this work is to propose a lightweight model with the minimum number of parameters for AU detection in resource-constrained systems. Model sizes (parameters in millions):

Method         Backbone                Params (M)
Zhang [5]      VGGNet                  >138
FSNet [20]     ResNet-50 (customized)  8.19
ARL [25]       VGGNet                  >138
STRAL [30]     VGGNet                  >138
LGRNet [78]    BiLSTM                  >4
MCFE [79]      DenseNet-121            >3
CWCF [80]      ResNet-9                >26
IDENnet [81]   LightCNN                >6.572
JAU [82]       ResNet-18               >11
This work      Attention-CNN           1.5…”
Section: B. Model Size
Mentioning confidence: 99%
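The ">138" (million) entries for VGGNet in the comparison above can be checked from the layer configuration alone. A short sketch for the standard VGG-16 (13 3x3 conv layers plus 3 fully connected layers, 224x224 ImageNet input) shows the fully connected layers dominate the count:

```python
def conv_params(k, c_in, c_out):
    """3x3 conv: k*k*c_in*c_out weights plus one bias per output channel."""
    return k * k * c_in * c_out + c_out

def fc_params(n_in, n_out):
    """Fully connected layer: weight matrix plus biases."""
    return n_in * n_out + n_out

# Standard VGG-16 conv configuration: (in_channels, out_channels) per layer.
vgg16_convs = [(3, 64), (64, 64),
               (64, 128), (128, 128),
               (128, 256), (256, 256), (256, 256),
               (256, 512), (512, 512), (512, 512),
               (512, 512), (512, 512), (512, 512)]

total = sum(conv_params(3, cin, cout) for cin, cout in vgg16_convs)
total += fc_params(512 * 7 * 7, 4096)  # 224x224 input pooled down to 7x7x512
total += fc_params(4096, 4096)
total += fc_params(4096, 1000)         # ImageNet 1000-class head
print(total)  # 138357544, i.e. >138M parameters
```

About 124M of the 138M parameters sit in the fully connected layers, which is why lightweight AU detectors replace or shrink that head first.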
“…According to the Facial Action Coding System (FACS) (Friesen and Ekman 1978), facial action units (AUs), defined as combinations of facial muscle movements, can describe almost all facial behaviors, which is essential for fine-grained facial behavior analysis. In recent years, deep learning has proved its efficacy and efficiency in the facial action unit recognition task (Cui et al. 2020; Chen et al. 2021b; Yang et al. 2021; Song et al. 2021b), but there is still room for improvement, since some inherent properties of AUs have not been fully exploited.…”
Section: Introduction
Mentioning confidence: 99%