Analyzing the impact of activation functions on the performance of the data-driven gait model (2023)
DOI: 10.1016/j.rineng.2023.101029

Cited by 18 publications (3 citation statements)
References 39 publications
“…x denotes input data in the function, while e denotes a constant value that has a value of 2.71828 [39]. As previously described, the sigmoid is suitable for binary classification cases, where the value range of the sigmoid activation function is from 0 to 1 [40].…”
Section: Deep Learning (DL) (mentioning)
confidence: 99%
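The formula referred to in this statement is the standard logistic sigmoid, sigmoid(x) = 1 / (1 + e^(-x)) with e ≈ 2.71828. A minimal NumPy sketch (an illustration, not code from the cited papers) showing that its outputs stay in the open interval (0, 1), which is why it suits binary classification:

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: squashes any real input into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))  # np.exp uses e ~ 2.71828 as its base

# Values near 0 or 1 can be read as class probabilities in a binary classifier.
print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # roughly [0.018, 0.5, 0.982]
```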
“…The different activation functions used include Sigmoid, Rectified Linear Unit (ReLU), hyperbolic tangent (Tanh), Leaky ReLU, Exponential Linear Unit (ELU), and Softmax. ReLU is a commonly used activation function [42,43]. The output layer is the final layer of the artificial neural network (ANN) and computes the final prediction.…”
Section: Artificial Neural Network (ANN) (mentioning)
confidence: 99%
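For reference, the activations named in this statement have simple closed-form definitions. The sketch below gives illustrative NumPy versions; the alpha defaults are common textbook choices, not values reported by the cited work:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)                      # zero for negative inputs

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)           # small slope instead of zero

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def softmax(x):
    # Typically used in the output layer to turn raw scores into class probabilities.
    z = np.exp(x - np.max(x))                      # subtract max for numerical stability
    return z / z.sum()

x = np.array([-2.0, 0.0, 3.0])
print(relu(x), np.tanh(x), leaky_relu(x), elu(x), softmax(x), sep="\n")
```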
“…The dropout layer then follows a dense layer, which is composed of 256 neurons, and is activated with the SeLU activation function. SeLU is a simple yet effective activation function developed to address the dying ReLU problem [55]. The final layer consists of two and five neurons, with softmax activation for binary and multi-class classifications, respectively.…”
Section: Teacher Model (mentioning)
confidence: 99%
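The quoted passage describes a small dense classification head. A hedged Keras sketch of that layout follows; the input dimension and dropout rate are assumptions for illustration, since the quote only specifies a 256-neuron SELU dense layer, a dropout layer, and a 2- or 5-way softmax output:

```python
import tensorflow as tf

def build_head(n_classes: int, input_dim: int = 128, dropout_rate: float = 0.3) -> tf.keras.Model:
    """Dense layer with SELU, followed by dropout and a softmax output.

    input_dim and dropout_rate are illustrative assumptions, not values
    taken from the cited paper.
    """
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(256, activation="selu"),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # 2 for binary, 5 for multi-class
    ])

binary_head = build_head(n_classes=2)
multiclass_head = build_head(n_classes=5)
```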