2019
DOI: 10.1609/aaai.v33i01.33018594
Semantic Relationships Guided Representation Learning for Facial Action Unit Recognition

Abstract: Facial action unit (AU) recognition is a crucial task for facial expression analysis and has attracted extensive attention in the fields of artificial intelligence and computer vision. Existing works have either focused on designing or learning complex regional feature representations, or delved into various types of AU relationship modeling. Albeit with varying degrees of progress, it is still arduous for existing methods to handle complex situations. In this paper, we investigate how to integrate the semanti…

Cited by 130 publications (111 citation statements)
References 25 publications
“…There are many works about AU occurrence detection [8,22,23,29], which is considered as a multilabel classification problem. For instance, several works considered the relationships on various AUs and modeled AU interrelations to improve recognition accuracy [4,21,26]. However, most works relied on probabilistic graphical models with manually extracted features [5,53], which limits the extension for deep learning.…”
Section: Facial AU Detection
Mentioning confidence: 99%
“…Comparison with single-modal based methods . To prove that multimodal provides additional valuable information for AU detection, we first compare our method to the single modality based methods, including Deep Structure Inference Network (DSIN) [7], Joint AU Detection and Face Alignment (JAA) [31], Optical Flow network (OF-Net) [42], Local relationship learning with Personspecific shape regularization (LP-Net) [29], Semantic Relationships Embedded Representation Learning ( SRERL) [20], and ResNet18.…”
Section: 3.1
Mentioning confidence: 99%
“…Recently, Li et al [10] proposed the AU semantic relationship embedded representation learning (SRERL) framework to combine facial AU detection and Gated Graph Neural Network (GGNN) [12] and achieved good results. But the commonly used Graph Convolutional Network (GCN) for classification task with relation modeling is adopted for AU relation modeling in our proposed method, while the Gated Graph Neural Network (GGNN) adopted in [13] is inspired by GRU and mainly used for the task of Visual Question Answering and Semantic Segmentation.…”
Section: Related Work
Mentioning confidence: 99%
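The citation statement above contrasts GCN-based AU relation modeling with the GGNN used in SRERL. As a rough illustration of the GCN side of that comparison, the sketch below runs one graph-convolution step over a small AU relation graph and produces per-AU occurrence probabilities via independent sigmoids (the multi-label formulation mentioned in the first citation statement). The adjacency matrix, feature sizes, and weights here are illustrative assumptions, not the setup of any cited paper.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2},
    # the standard GCN propagation matrix.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def gcn_layer(H, A_norm, W):
    # One graph-convolution step: each AU's feature is mixed with its
    # neighbors' features along relation edges, then projected and ReLU'd.
    return np.maximum(A_norm @ H @ W, 0.0)

rng = np.random.default_rng(0)

# Toy AU co-occurrence graph over 4 AUs (assumed chain structure).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

H = rng.standard_normal((4, 8))   # per-AU regional features (8-dim, assumed)
W = rng.standard_normal((8, 8))   # layer weights

H1 = gcn_layer(H, normalize_adjacency(A), W)

# Multi-label head: independent sigmoid per AU, not a softmax,
# since several AUs can be active at once.
w_out = rng.standard_normal(8)
probs = 1.0 / (1.0 + np.exp(-(H1 @ w_out)))
```

The design point being illustrated: because AU detection is multi-label, the graph layer refines each AU's representation using related AUs, but the final prediction stays one independent probability per AU.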