2020
DOI: 10.1016/j.patrec.2019.12.013
Facial expression recognition based on deep convolution long short-term memory networks of double-channel weighted mixture

Cited by 70 publications (31 citation statements)
References 18 publications
“…We incorporate LSTM in the network architecture. LSTM has been widely used in the task of facial expression and action recognition [29][30][31][32]. Zhang et al. [31] combined the time and texture information of image sequences by …”
Section: Related Work
Mentioning confidence: 99%
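The CNN-plus-LSTM pattern referenced in this excerpt can be illustrated with a short sketch. The code below is a minimal, hypothetical example (not the cited authors' published model): a small per-frame CNN extracts texture features and an LSTM aggregates them across the image sequence; the layer sizes, the grayscale 48x48 input, and the 7-class output are all assumptions.

# Illustrative sketch (assumption, not the authors' code): per-frame CNN
# texture features aggregated over time by an LSTM for sequence-based FER.
import torch
import torch.nn as nn

class CnnLstmFER(nn.Module):
    def __init__(self, num_classes=7, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Small CNN applied independently to every frame (texture features).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # LSTM aggregates the per-frame features over time.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, 1, H, W) grayscale face-image sequence
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-frame features
        _, (h_n, _) = self.lstm(feats)                     # temporal aggregation
        return self.classifier(h_n[-1])                    # expression logits

# Example: a batch of 4 sequences, each 16 frames of 48x48 faces
logits = CnnLstmFER()(torch.randn(4, 16, 1, 48, 48))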
“…In [35], the concept of attention is introduced into the first layer of the convolutional neural network to perform convolution calculations on ROIs. In [36], [37], multichannel convolutional neural networks were used for feature fusion. In [36], a dual-channel weighted-mixture deep convolutional neural network (WMDCNN) based on static images and a dual-channel weighted-mixture deep long short-term memory network based on image sequences (WMCNN-LSTM) were proposed.…”
Section: B. Convolutional Neural Network Methods
Mentioning confidence: 99%
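The dual-channel weighted-mixture idea described in this excerpt can likewise be sketched. The example below is an assumption-laden illustration, not the published WMDCNN: two CNN branches score the same face (for instance the raw grayscale image and a hand-crafted texture map such as LBP), and their class probabilities are fused with fixed weights; the branch inputs, layer sizes, and mixture weights are placeholders.

# Minimal sketch (assumption, not the published WMDCNN): a two-channel
# weighted mixture whose branch outputs are fused as class probabilities.
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_branch(num_classes=7):
    # One convolutional channel producing class logits.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, num_classes),
    )

class WeightedTwoChannelNet(nn.Module):
    def __init__(self, num_classes=7, w1=0.6, w2=0.4):
        super().__init__()
        self.branch_raw = small_branch(num_classes)  # raw-image channel
        self.branch_tex = small_branch(num_classes)  # texture-map channel
        self.w1, self.w2 = w1, w2                    # mixture weights (chosen here, not taken from the paper)

    def forward(self, raw_img, tex_img):
        p1 = F.softmax(self.branch_raw(raw_img), dim=1)
        p2 = F.softmax(self.branch_tex(tex_img), dim=1)
        return self.w1 * p1 + self.w2 * p2           # weighted mixture of class probabilities

# Example usage with random 48x48 inputs standing in for the two channels
net = WeightedTwoChannelNet()
probs = net(torch.randn(2, 1, 48, 48), torch.randn(2, 1, 48, 48))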
“…Although these approaches are effective at extracting spatial information, they fail to capture morphological and contextual variations during the expression process. Recent methods aim to solve this problem by using massive datasets to obtain more effective features for FER [9][10][11][12][13][14][15]. Some researchers use multimodal fusion, combining cues such as voice, expression, and action, to recognize emotions [16].…”
Section: Related Work
Mentioning confidence: 99%