2021
DOI: 10.1002/int.22391

Learning to disentangle emotion factors for facial expression recognition in the wild

Abstract: Facial expression recognition (FER) in the wild is a very challenging problem because expressions appear under complex conditions (e.g., large head poses, illumination variation, occlusions), which leads to suboptimal FER performance. Accuracy in FER relies heavily on discovering highly discriminative, emotion-related features. In this paper, we propose an end-to-end module that disentangles latent emotion-discriminative factors from the complex variation factors for FER, yielding salient emotion features. The…
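The truncated abstract describes a module that separates an emotion-discriminative code from nuisance variation (pose, illumination, occlusion). As a rough illustration only, the following PyTorch sketch shows one common way such a factor split is wired up: a shared backbone, two latent heads, a classifier on the emotion code, and a reconstruction decoder that keeps the nuisance code informative. Every module name, size, and loss here is an assumption for illustration, not the paper's actual architecture.

```python
# A minimal sketch of a factor-disentangling FER module. All names and
# dimensions are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class DisentangleFER(nn.Module):
    def __init__(self, latent_dim=64, num_classes=7):
        super().__init__()
        self.backbone = nn.Sequential(          # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Two heads split the representation into an emotion factor and a
        # nuisance factor (head pose, illumination, occlusion, ...).
        self.to_emotion = nn.Linear(64, latent_dim)
        self.to_nuisance = nn.Linear(64, latent_dim)
        self.classifier = nn.Linear(latent_dim, num_classes)
        # A toy decoder reconstructs a coarse 32x32 face from both factors,
        # which pressures the nuisance code to retain non-emotion content.
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, 3 * 32 * 32),
            nn.Unflatten(1, (3, 32, 32)),
        )

    def forward(self, x):
        h = self.backbone(x)
        z_emo, z_var = self.to_emotion(h), self.to_nuisance(h)
        logits = self.classifier(z_emo)   # emotion predicted from z_emo only
        recon = self.decoder(torch.cat([z_emo, z_var], dim=1))
        return logits, recon, z_emo, z_var
```

In training one would typically combine a cross-entropy loss on `logits` with a reconstruction loss on `recon`, so that discriminative and generative pressures pull the two codes apart.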

Cited by 11 publications (7 citation statements)
References 44 publications
“…Finally, reference exemplar-based algorithms based on unsupervised disentanglement learning [107][108][109] are becoming a promising research direction. Compared with only manually changing the attribute vector, this method directly learns the image-to-image translation along with the attributes, and then manipulates these attributes using a simple traversal across regularization dimensions, so that images with more realistic details can be generated.…”
Section: Discussion
confidence: 99%
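The "traversal across regularization dimensions" this statement mentions can be pictured in a few lines of code: once a disentangled latent space is learned, sweeping a single coordinate edits one attribute at a time. This is a hedged sketch assuming a pretrained disentangled autoencoder; `encoder`, `decoder`, and the chosen dimension are hypothetical placeholders, not a specific published implementation.

```python
# Sketch of attribute manipulation by latent traversal; assumes pretrained
# encoder/decoder with a disentangled (regularized) latent space.
import torch

@torch.no_grad()
def traverse_attribute(encoder, decoder, image, dim, values=(-3, -1, 0, 1, 3)):
    z = encoder(image.unsqueeze(0))      # (1, latent_dim) latent code
    outputs = []
    for v in values:
        z_edit = z.clone()
        z_edit[0, dim] = float(v)        # move along one regularized axis
        outputs.append(decoder(z_edit))  # each decode shifts one attribute
    return torch.cat(outputs, dim=0)     # stack of edited reconstructions
```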
“…
Method                     Year  Accuracy (%)
ResNet [34]                2016  72.40
CPC [36]                   2018  71.35
SHCNN [37]                 2019  69.10
Fa-Net [38]                2019  71.10
BReG-NeXt-50 [39]          2020  71.53
DisEmoNet [40]             2021  71.72
VGGNet [41]                2021  73.28
Landmark-guided GCNN [35]  2022  73.26
Ours                       2022  74.23

FERPLUS Comparison: Table 2 displays the outcomes of a comparison of this paper's approach on the FERPLUS dataset with other state-of-the-art techniques. We compared our model with other CNN methods, such as ResNet+VGG [42], SENet [43], SHCNN [37], RAN [19], VTFF [44], ADC-Net [45], and the latest methods CERN [46] and A-MobileNet [47].…”
Section: Methods
confidence: 99%
“…
Method          Year  Accuracy (%)
gACNN [18]      2018  85.07
APM-VGG [57]    2019  85.17
MA-Net [56]     2020  88.42
DisEmoNet [40]  2020  83.78
RAN [19]        2020  86.90
…”
Section: Methods
confidence: 99%
“…Motivated by the achievements of emotional conversion in voice [33, 34] and facial expression [35], we propose an emotional gait conversion approach that transforms natural gaits into emotional gaits by separating identity and emotion representations for data augmentation. The contributions of this work can be summarized as follows:
- We introduce an MTL discriminator for joint learning of gait identity and emotion, which takes nonverbal communication cues into account to enhance HRI.
- We propose a novel emotional gait conversion model with an adversarial loss and a cycle-consistency loss to realize the mutual transformation between natural and emotional gaits.
- We propose two data augmentation strategies based on the emotional conversion model to increase the amount and diversity of the existing restricted dataset.
- We present an augmented synthetic dataset of human emotional gait, validated by a multitask classifier, achieving absolute increases of 2.1% in identity recognition and 6.8% in emotion recognition, respectively.…”
Section: Introduction
confidence: 99%
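The adversarial and cycle-consistency losses named in this statement follow the familiar CycleGAN-style recipe for unpaired domain conversion. Below is a minimal sketch under that assumption; the generator names `G_ne`/`G_en`, the discriminator `D_e`, and the weighting are invented for illustration and are not the cited paper's code.

```python
# Sketch of a CycleGAN-style objective for natural <-> emotional gait
# conversion; all names and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def conversion_losses(G_ne, G_en, D_e, natural, lam=10.0):
    """G_ne: natural -> emotional generator; G_en: its inverse;
    D_e: real/fake discriminator on emotional gaits."""
    fake_emotional = G_ne(natural)
    logits = D_e(fake_emotional)
    # Adversarial term: the generator tries to make D_e output "real" (1).
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Cycle-consistency term: mapping there and back should recover the input.
    cycle = F.l1_loss(G_en(fake_emotional), natural)
    return adv + lam * cycle
```

The L1 cycle term is what lets unpaired natural and emotional gait sequences supervise each other: the conversion is pushed to be invertible, so content (identity) survives while the emotion style changes.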