2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489123
Deep Learning for Illumination Invariant Facial Expression Recognition

Cited by 11 publications (5 citation statements)
References 11 publications
“…Autoencoders learn an encoder function f that maps an input image x to a hidden representation h = f(x), and a decoder function g that maps h to a reconstruction y = g(f(x)), where y is an approximation of x. However, recent work [5] has established that the target reconstruction need not be the same as the input x to the autoencoder. This is supported by the theory that, to be useful, an autoencoder should learn only an approximation of the target reconstruction, not an identity function that replicates it [6].…”
Section: Related Work
confidence: 99%
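The encoder/decoder decomposition quoted above can be sketched in a few lines. This is a minimal illustration, not the cited paper's architecture: the dimensions, tanh activation, and random weights are all assumptions chosen only to show the shapes of f, g, and y = g(f(x)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a flattened 8x8 grayscale patch and a 16-unit code.
d_in, d_hid = 64, 16
W_enc = rng.standard_normal((d_hid, d_in)) * 0.1
W_dec = rng.standard_normal((d_in, d_hid)) * 0.1

def f(x):
    """Encoder: maps input x to a hidden representation h = f(x)."""
    return np.tanh(W_enc @ x)

def g(h):
    """Decoder: maps the hidden code h to a reconstruction y = g(h)."""
    return W_dec @ h

x = rng.standard_normal(d_in)
y = g(f(x))                   # y approximates the reconstruction target
loss = np.mean((y - x) ** 2)  # standard case: the target is the input itself
```

The last line is the conventional objective; per the quoted passage, the target in the squared error can be replaced by something other than x itself.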
“…Furthermore, since GANs are known to be difficult to train, owing to a sensitivity to hyper-parameters and parameter initialization that often leads to mode collapse, the GASCA model is trained in a GLW fashion. However, since the greedy nature of GLW leads to error accumulation as individual layers are trained and stacked [5], we build on the gradual greedy layer-wise training algorithm from [5] and adapt it for adversarial autoencoders. Accordingly, we introduce the GAN gradual greedy layer-wise (GANGGLW) training framework and formally define it in Algorithm 1.…”
Section: Generative Adversarial Stacked Convolutional Autoencoders
confidence: 99%
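The error-accumulation problem mentioned in this statement comes from the structure of plain greedy layer-wise (GLW) training: each layer is fit only on the codes produced by the layers already stacked beneath it, so early-layer reconstruction errors are frozen into later layers' inputs. A generic GLW loop might look like the sketch below; this is illustrative plain GLW with linear layers, not the paper's GANGGLW algorithm, and all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_layer(X, d_hid, steps=300, lr=0.5):
    """Train one linear autoencoder layer (X -> h -> X) by gradient descent."""
    n, d = X.shape
    We = rng.standard_normal((d_hid, d)) * 0.1   # encoder weights
    Wd = rng.standard_normal((d, d_hid)) * 0.1   # decoder weights
    for _ in range(steps):
        h = X @ We.T                             # hidden codes
        Y = h @ Wd.T                             # reconstruction
        G = 2.0 * (Y - X) / (n * d)              # dLoss/dY for mean-squared error
        Wd -= lr * (G.T @ h)                     # decoder gradient step
        We -= lr * ((G @ Wd).T @ X)              # encoder gradient step
    return We

def greedy_layerwise(X, layer_sizes):
    """Stack layers greedily: each layer trains only on the codes of the
    layers beneath it, so early-layer errors propagate into later layers
    (the accumulation that gradual GLW training is meant to mitigate)."""
    codes, encoders = X, []
    for k in layer_sizes:
        We = train_layer(codes, k)
        encoders.append(We)
        codes = codes @ We.T                     # feed codes to the next layer
    return encoders, codes

X = rng.standard_normal((32, 20))                # toy data standing in for images
encoders, codes = greedy_layerwise(X, [12, 6])
```

Note that once a layer is trained its weights are never revisited, which is precisely why a "gradual" variant that softens this freeze can reduce accumulated error.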
“…Ruiz-Garcia et al. [11] proposed a deep Stacked Convolutional Autoencoder (SCAE) method. It is trained to reconstruct images captured under different illumination conditions to a mean-luminance version.…”
Section: Related Work
confidence: 99%
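The key idea in this statement is that the reconstruction target is not the input itself but the same face re-rendered at a reference (mean) luminance. A minimal sketch of such a target construction is below; the normalization rule, image sizes, and reference value are assumptions for illustration, not the SCAE paper's exact preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_luminance_target(img, reference_mean=0.5):
    """Shift an image's mean intensity to a fixed reference luminance."""
    return img - img.mean() + reference_mean

# Faces under varied lighting (toy random images in [0, 1]).
batch = rng.uniform(0.0, 1.0, size=(4, 48, 48))
targets = np.stack([mean_luminance_target(im) for im in batch])
```

The autoencoder would then minimize ||g(f(x)) - target||^2 rather than ||g(f(x)) - x||^2, making the learned representation less sensitive to the input's lighting.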
“…Therefore, the main aim of facial expression recognition methods and approaches is to enable machines to automatically estimate the emotional content of a human face. In intelligent tutoring systems, emotions and learning are inextricably bound together, so recognizing learners' emotional states could significantly improve the efficiency of the learning procedures delivered to them [9]-[11].…”
Section: Introduction
confidence: 99%