2019
DOI: 10.1007/s00371-019-01636-3
Deep convolutional BiLSTM fusion network for facial expression recognition

Cited by 72 publications (36 citation statements)
References 32 publications
“…Table 7 illustrates the comparison with previous work. The results of our model on the CK+ and MMI databases achieve roughly the same accuracy as obtained by Liang et al. [54] and Zhang et al. [56]. Compared to the other databases, recognition performance degrades significantly on the SFEW database due to its large variations in pose and illumination.…”
Section: Experimental Analysis (supporting)
confidence: 75%
“…The Bi-LSTM [43] model consists of a forward LSTM and a reverse LSTM, which extract forward and reverse context features, respectively. For the input x_t at time t, the hidden states obtained by the forward LSTM and the reverse LSTM are →h_t and ←h_t, respectively.…”
Section: CRF Layer (mentioning)
confidence: 99%
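As a rough illustration of the forward and backward hidden states described in the statement above, the following PyTorch sketch builds a bidirectional LSTM and splits its output into the two directions. The layer sizes and tensor shapes are arbitrary assumptions for demonstration, not values from the cited work.

```python
import torch
import torch.nn as nn

# Minimal BiLSTM sketch; input size 128 and hidden size 64 are illustrative only.
bilstm = nn.LSTM(input_size=128, hidden_size=64,
                 num_layers=1, batch_first=True, bidirectional=True)

x = torch.randn(8, 20, 128)          # (batch, time steps, features)
outputs, (h_n, c_n) = bilstm(x)      # outputs: (batch, time, 2 * hidden_size)

# At each time step t, the output concatenates the forward hidden state
# (context from x_1..x_t) and the backward hidden state (context from x_T..x_t).
forward_h_t = outputs[:, :, :64]     # →h_t from the forward LSTM
backward_h_t = outputs[:, :, 64:]    # ←h_t from the reverse LSTM
```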
“…There are also studies conducted with CNN architecture in the FER field [37][38][39][40][41][42][43][44][45][46]. FER studies consist of three phases.…”
Section: Introduction (mentioning)
confidence: 99%
“…In this network consisting of three parts, a Deep Spatial Network (DSN) captures spatial appearance features, a Deep Temporal Network (DTN) exploits temporal information, and a BiLSTM network is used for recognizing the six basic expressions. Comparative experiments against 13 different methods on the CK+, Oulu-CASIA and MMI datasets showed that the proposed method can achieve high performance [42]. Bargal et al. proposed an architecture to classify 8 emotional expressions (including contempt). Combinations of the cropped Acted Facial Expressions in the Wild (AFEW) 6.0 video dataset were used to train the architecture.…”
mentioning
confidence: 99%
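The three-part design summarized in the statement above can be sketched roughly as follows. The class and module names (FusionFER, dsn, bilstm), the layer sizes, and the omission of the separate DTN stream are assumptions for illustration only; this is not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FusionFER(nn.Module):
    """Hypothetical sketch: per-frame spatial features fused over time by a BiLSTM."""
    def __init__(self, feat_dim=256, hidden=128, num_classes=6):
        super().__init__()
        # Placeholder for the spatial network: one conv block standing in for a deep CNN.
        self.dsn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # BiLSTM fuses the per-frame features across the clip.
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, clips):                      # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        frames = clips.flatten(0, 1)               # (batch * time, 3, H, W)
        feats = self.dsn(frames).view(b, t, -1)    # per-frame spatial features
        seq, _ = self.bilstm(feats)                # temporal fusion over the clip
        return self.classifier(seq[:, -1])         # logits for the six expressions

logits = FusionFER()(torch.randn(2, 16, 3, 64, 64))  # example clip batch
```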