2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.327

Facial Expression Recognition Using Visual Saliency and Deep Learning

Abstract: We have developed a convolutional neural network for the purpose of recognizing facial expressions in human beings. We have fine-tuned the existing convolutional neural network model trained on the visual recognition dataset used in the ILSVRC2012 to two widely used facial expression datasets - CFEE and RaFD, which when trained and tested independently yielded test accuracies of 74.79% and 95.71%, respectively. Generalization of results was evident by training on one dataset and testing on the other. Further, t…
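The abstract describes fine-tuning a CNN pretrained on the ILSVRC2012 visual recognition dataset to the CFEE and RaFD expression datasets. Below is a minimal transfer-learning sketch of that idea, assuming a torchvision AlexNet backbone and a 7-class expression head; the abstract excerpt does not specify the architecture or the number of expression classes, so both are assumptions.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained CNN for facial expression
# recognition. Backbone (AlexNet) and class count (7) are assumptions; the
# abstract only states an ILSVRC2012-pretrained model was fine-tuned.
import torch
import torch.nn as nn
from torchvision import models

NUM_EXPRESSIONS = 7  # assumption: basic expression categories

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
# Swap the 1000-way ImageNet classifier for an expression classifier.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_EXPRESSIONS)

# Fine-tune all layers with a small learning rate.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of face images and expression labels."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```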

Cited by 58 publications (18 citation statements)
References 22 publications
“…Other studies proposed to automatically learn the key parts for facial expression. For example, [159] employed a deep multi-layer network [160] to detect the saliency map, which puts intensities on parts demanding visual attention, and [161] applied the neighbor-center difference vector (NCDV) [162] to obtain features with more intrinsic information.…”
Section: Diverse Network Input (mentioning)
confidence: 99%
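The statement above describes using a saliency map from a deep network to emphasize face regions that demand visual attention. A minimal sketch of that general idea follows; the element-wise weighting scheme and the saliency_weighted_input helper are illustrative assumptions, not the exact procedure of the cited work.

```python
# Hedged sketch: re-weight a face image by a precomputed saliency map so that
# regions demanding visual attention contribute more to a downstream
# expression classifier. The multiplication scheme is an assumption.
import numpy as np

def saliency_weighted_input(face_img: np.ndarray, saliency: np.ndarray) -> np.ndarray:
    """face_img: HxWx3 float image in [0, 1]; saliency: HxW map from any
    saliency predictor. Returns the image weighted by normalized saliency."""
    s = saliency.astype(np.float32)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)  # normalize to [0, 1]
    return face_img * s[..., None]                   # broadcast over channels
```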
“…Study 1: Mavani et al. (2017) trained a convolutional neural network to bypass the FACS process by removing the need for extracting action units. Their study found an overall test accuracy of 95.71% for their model when trained and tested on the Radboud Faces Database (Langner et al., 2010), but this fell to 65.39% when attempting to generalise across datasets.…”
Section: Discrete Emotions (mentioning)
confidence: 99%
“…They considered images from the RaFD dataset for their experiment. Using visual saliency and deep learning, Mavani et al. (2017) developed an FER system. They used the CFEE and RaFD datasets.…”
Section: State-of-the-art (mentioning)
confidence: 99%
“…Deep sparse auto encoders were the basis of their framework.…”

Method (author, year)     | Features / classifier                                   | Dataset: accuracy (%)
… (2015)                  | Viola-Jones face detection, AAM, Gabor filter, PCA, ELM | JAFFE: 94.00, CK+: 95.00
Ghimire and Lee (2014)    | HOG, ELM ensemble using bagging                         | JAFFE: 94.37, CK+: 97.30
Alphonse and Dejey (2016) | Facial points, Gabor filter, PCA, ELM                   | JAFFE: 91.40
Rao et al. (2015)         | SURF, gentle AdaBoost                                   | RaFD: 90.64
Mavani et al. (2017)      | Visual saliency, deep learning                          | RaFD: 95.70
Zeng et al. (2018)        | AAM, HOG, PCA, deep sparse auto encoders                | CK+: 95.79

Section: State-of-the-art (mentioning)
confidence: 99%