2020 43rd International Conference on Telecommunications and Signal Processing (TSP)
DOI: 10.1109/tsp49548.2020.9163446
Empirical Evaluation of Activation Functions in Deep Convolution Neural Network for Facial Expression Recognition

Cited by 33 publications (18 citation statements)
References 12 publications
“…A sigmoid function is applied to the output of the last layer to get a probability value. The activation in the MLP was Leaky ReLU (19). A Dropout rate of 0.1 was set between layers.…”
Section: MLP Classifiers
confidence: 99%
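As a rough illustration of the classifier this statement describes, the following sketch assumes PyTorch and wires Leaky ReLU activations, a dropout of 0.1 between layers, and a sigmoid on the last layer; the layer widths (128, 64, 32) are placeholders and are not taken from the cited paper.

```python
import torch.nn as nn

# Minimal MLP sketch: Leaky ReLU between layers, dropout 0.1, sigmoid output.
# Layer widths are hypothetical, chosen only for illustration.
mlp = nn.Sequential(
    nn.Linear(128, 64),   # hypothetical input -> hidden
    nn.LeakyReLU(),       # Leaky ReLU activation in the MLP
    nn.Dropout(p=0.1),    # dropout rate of 0.1 between layers
    nn.Linear(64, 32),
    nn.LeakyReLU(),
    nn.Dropout(p=0.1),
    nn.Linear(32, 1),
    nn.Sigmoid(),         # sigmoid on the last layer gives a probability value
)
```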
“…The structure of the neural network used in our experiment is a structure with four hidden layers, each hidden layer is composed of n neurons, where n is grid searched from the value set {64, 128, 256}; each hidden layer is activated with ReLU function [42]. In the output layer, the SoftMax activation function is performed to get the probabilistic prediction. Learning rate, which is an important parameter to the final predictive performance, is determined by grid searching from the space of {10⁻⁴, 10⁻³, 10⁻²}.…”
Section: Results
confidence: 99%
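A minimal sketch of the setup quoted above, assuming PyTorch; the input dimension, class count, and training loop are placeholders, while the four ReLU hidden layers, SoftMax output, and the two grid-searched spaces follow the statement.

```python
import itertools
import torch.nn as nn

def build_net(n, in_dim=32, num_classes=10):
    """Four hidden layers of n neurons each, ReLU activations, SoftMax output.
    in_dim and num_classes are hypothetical placeholders."""
    layers, dim = [], in_dim
    for _ in range(4):
        layers += [nn.Linear(dim, n), nn.ReLU()]
        dim = n
    layers += [nn.Linear(dim, num_classes), nn.Softmax(dim=1)]  # probabilistic prediction
    return nn.Sequential(*layers)

# Grid search over hidden width and learning rate, as in the quoted experiment.
for n, lr in itertools.product([64, 128, 256], [1e-4, 1e-3, 1e-2]):
    model = build_net(n)
    # optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # train and evaluate here
```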
“…The model consists of six bidimensional convolutional layers in which the stride is fixed at 2 in each dimension to downsample the feature maps size after each convolution. The activation function of the first five layers is the LeakyReLU [52] to avoid blockage during training due to negative values in the input signals. In layers 2 to 5, batch normalization is also applied over every instance.…”
Section: Methods
confidence: 99%
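The convolutional stack described above can be sketched as follows, again assuming PyTorch; the channel counts, kernel size, and padding are placeholders, while the six stride-2 convolutions, LeakyReLU on the first five layers, and batch normalization in layers 2 to 5 follow the statement.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, use_bn, use_act):
    """One stride-2 conv layer, optionally followed by BatchNorm and LeakyReLU."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)]  # stride 2 downsamples
    if use_bn:
        layers.append(nn.BatchNorm2d(out_ch))   # batch normalization in layers 2 to 5
    if use_act:
        layers.append(nn.LeakyReLU(0.2))        # LeakyReLU on the first five layers
    return layers

channels = [3, 64, 128, 256, 512, 512, 1]       # hypothetical channel widths
blocks = []
for i in range(6):
    blocks += conv_block(channels[i], channels[i + 1],
                         use_bn=(1 <= i <= 4),  # layers 2-5 (0-indexed 1..4)
                         use_act=(i < 5))       # no activation after the sixth layer
encoder = nn.Sequential(*blocks)
```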