2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7966040

Stacked deep convolutional auto-encoders for emotion recognition from facial expressions

Abstract: Emotion recognition is critical for everyday living and is essential for meaningful interaction. If we are to progress towards human-machine interaction that engages the human user, the machine should be able to recognise the emotional state of the user. Deep Convolutional Neural Networks (CNN) have proven to be efficient in emotion recognition problems. The good degree of performance achieved by these classifiers can be attributed to their ability to self-learn a down-sampled feature vector that retai…

Cited by 37 publications (31 citation statements)
References 16 publications
“…Then, they introduce the similarity preservation term into the supervised auto-encoder to extract robust representations for single-sample-per-person face recognition. The work [23] employs a stacked auto-encoder to pre-train the weights of a deep CNN and improves the performance of facial emotion recognition. Xu et al. [24] use two shallow neural networks to connect two auto-encoders to deal with age-invariant face recognition and retrieval problems.…”
Section: B. Feature Representation Based on Auto-encoders
confidence: 99%
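
The pre-training scheme attributed to [23] can be sketched as follows: a convolutional auto-encoder is first trained on face images with a reconstruction objective, and its encoder weights are then reused to initialise the classification CNN before supervised fine-tuning. The code below is a minimal, illustrative PyTorch sketch of that idea; the layer sizes, 48x48 input, seven emotion classes, and placeholder batches are assumptions, not details taken from the paper.

# Minimal sketch: pre-train a CNN's convolutional layers as a convolutional
# auto-encoder, then reuse the encoder weights in a classifier.
# Shapes, hyper-parameters, and data below are illustrative only.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 48x48 -> 24x24
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 24x24 -> 12x12
            nn.ReLU(),
        )
    def forward(self, x):
        return self.layers(x)

class ConvDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),  # 12 -> 24
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),   # 24 -> 48
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.layers(z)

class EmotionCNN(nn.Module):
    """Classifier that reuses the pre-trained encoder as its feature extractor."""
    def __init__(self, encoder, num_classes=7):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 12 * 12, num_classes),  # logits; softmax is applied inside the loss
        )
    def forward(self, x):
        return self.head(self.encoder(x))

# Stage 1: unsupervised pre-training with a reconstruction objective.
encoder, decoder = ConvEncoder(), ConvDecoder()
ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
recon_loss = nn.MSELoss()
faces = torch.rand(8, 1, 48, 48)            # placeholder batch of face crops
ae_opt.zero_grad()
recon_loss(decoder(encoder(faces)), faces).backward()
ae_opt.step()

# Stage 2: supervised fine-tuning; the encoder weights are carried over.
model = EmotionCNN(encoder)
clf_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce_loss = nn.CrossEntropyLoss()
labels = torch.randint(0, 7, (8,))          # placeholder emotion labels
clf_opt.zero_grad()
ce_loss(model(faces), labels).backward()
clf_opt.step()

In a strictly layer-wise ("stacked") variant, each auto-encoder layer would be trained greedily before stacking; here both convolutional layers are trained jointly for brevity.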
“…At the fully connected layer, the output-unit activations of the network are produced by a softmax function, which yields a probability distribution over K possible outcomes. During training, the network uses cross entropy to measure the distance between the predicted output and the expected output [20].…”
Section: Convolutional Neural Network Based FER
confidence: 99%
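
As a concrete illustration of the softmax/cross-entropy pairing described in this excerpt, the short Python snippet below converts K = 3 raw output-unit activations into a probability distribution and measures its cross-entropy against a one-hot target; the logit values and class count are made up for the example.

# Softmax turns raw output activations into a probability distribution;
# cross entropy then measures its distance from the expected (one-hot) output.
import numpy as np

logits = np.array([2.0, 0.5, -1.0])            # raw output-unit activations (illustrative)
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: non-negative, sums to 1
target = np.array([1.0, 0.0, 0.0])             # one-hot expected output

cross_entropy = -np.sum(target * np.log(probs))
print(probs.round(3), round(float(cross_entropy), 3))  # ~[0.786 0.175 0.039] 0.241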
“…The most popular supervised machine learning algorithms [17] are: support vector machines (SVM) [18], k-nearest neighbors (KNN) [19], and Convolutional Neural Networks (CNN) [20]. However, these methods generally require a large number of trials to approach the best possible recognition performance [21].…”
Section: Related Work
confidence: 99%
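
The "large number of trials" mentioned in this excerpt is typically spent on hyper-parameter search for the classical baselines. The sketch below shows one common way to organise that search with scikit-learn, using a grid search over SVM and k-NN settings; the feature vectors, grid values, and seven-class labels are placeholders, not data or results from any of the cited works.

# Illustrative hyper-parameter search for classical baselines (SVM, k-NN).
# The features and labels are random placeholders standing in for extracted
# facial-expression features.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))       # placeholder feature vectors
y = rng.integers(0, 7, size=200)     # placeholder labels for 7 emotion classes

svm_search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=3)
knn_search = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5, 7]}, cv=3)

for name, search in [("SVM", svm_search), ("k-NN", knn_search)]:
    search.fit(X, y)                 # each grid point is one "attempt"
    print(name, search.best_params_, round(search.best_score_, 3))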