2021
DOI: 10.3390/a14020051

Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks

Abstract: The nonlinearity of activation functions used in deep learning models is crucial for the success of predictive models. Several simple nonlinear functions, including the Rectified Linear Unit (ReLU) and Leaky-ReLU (L-ReLU), are commonly used in neural networks to impose nonlinearity. In practice, these functions remarkably enhance model accuracy. However, there is limited insight into how the nonlinearity of neural networks affects their performance. Here, we investigate the performance of neural network m…

Cited by 22 publications (15 citation statements)
References 22 publications

“…The supervised learning of an artificial neural network is a multidimensional optimization problem [17]. To solve this problem, it is necessary to find a set of weighting coefficients that minimize the error function.…”
Section: Methods
confidence: 99%
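To make the optimization framing concrete, here is a minimal sketch of gradient descent minimizing a mean-squared error function for a toy linear model; the data, model, and learning rate are illustrative assumptions, not details from the cited paper:

```python
import numpy as np

# Illustrative assumption: a tiny linear model y = X @ w, trained by
# gradient descent to find the weighting coefficients that minimize
# the mean-squared error E(w).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # toy inputs (assumed)
w_true = np.array([1.5, -2.0, 0.5])    # hidden "true" weights
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(3)                        # initial weighting coefficients
lr = 0.1                               # learning rate (assumed)
for _ in range(200):
    err = X @ w - y                    # residuals
    grad = 2 * X.T @ err / len(y)      # gradient of E(w) = mean(err**2)
    w -= lr * grad                     # step toward the minimizer

print(w)  # approaches w_true as E(w) is minimized
```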
“…As indicated in Figure 1, we used the Python deep-learning API Keras [22] to build the ANN model, which is a simple, fast, and powerful tool for building ANN models. Activation functions are an essential part of an ANN's ability to handle non-linear problems [23]. The rectified linear unit (ReLU) [24] is one of the most widely used activation functions in deep neural networks:…”
Section: Artificial Neural Network Model
confidence: 99%
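The quotation breaks off where the citing paper writes out the ReLU formula; the standard definition is ReLU(x) = max(0, x). Below is a minimal sketch of how such an ANN could be assembled with the Keras Sequential API; the input width, layer sizes, optimizer, and loss are assumptions made for illustration, not values from the cited work:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Minimal sketch: a feed-forward ANN with ReLU hidden layers.
# Layer widths, optimizer, and loss are assumed for illustration.
model = keras.Sequential([
    keras.Input(shape=(8,)),               # assumed 8 input features
    layers.Dense(32, activation="relu"),   # ReLU(x) = max(0, x)
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                       # single regression output
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```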
“…As with standard neural networks, each neuron in this layer is directly connected to all neurons in the layer below. To introduce non-linearity into the model, this layer uses the ReLU activation function (Agarap 2018; Ramachandran et al. 2017), which has been shown to be effective in deep learning architectures (Agarap 2018; Kulathunga et al. 2021). One of the major reasons behind its effective performance is its non-saturating…”
Section: Parameter Layer
confidence: 99%
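The quotation is cut off, but the property it points to is standard: ReLU's derivative is 1 for every positive input and does not decay, whereas saturating functions such as the sigmoid have gradients that vanish for large |x|. A quick NumPy sketch of the contrast (the comparison code is ours, for illustration only):

```python
import numpy as np

def relu_grad(x):
    # d/dx max(0, x): 1 for x > 0, 0 otherwise (non-saturating for x > 0)
    return (x > 0).astype(float)

def sigmoid_grad(x):
    # d/dx sigmoid(x) = s * (1 - s): vanishes as |x| grows (saturating)
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

xs = np.array([0.5, 2.0, 10.0, 50.0])
print(relu_grad(xs))     # [1. 1. 1. 1.]  -- gradient does not decay
print(sigmoid_grad(xs))  # shrinks toward 0 -- the saturation ReLU avoids
```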