2013 IEEE 8th International Symposium on Intelligent Signal Processing
DOI: 10.1109/wisp.2013.6657493
Performance comparison of ANN training algorithms for classification

Cited by 22 publications (17 citation statements); references 3 publications.
“…The feedforward neural network is built using the Levenberg-Marquardt training algorithm, which is widely used in the classification literature [14,15,16]. The network architecture is composed of nine neurons in the input layer and one neuron in the output layer. To achieve the paper's objectives, the number of hidden layers and the number of neurons per hidden layer are varied during training and simulation of the network.…”
Section: Methods (mentioning)
confidence: 99%
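The architecture search described above can be sketched as follows. Note that scikit-learn does not implement Levenberg-Marquardt training; this illustrative snippet uses its `lbfgs` solver purely to show varying the hidden topology of a nine-input, one-output feedforward classifier. The data and layer sizes are synthetic assumptions, not from the cited study.

```python
# Hypothetical sketch: varying hidden-layer count and width for a
# 9-input, 1-output feedforward classifier. scikit-learn has no
# Levenberg-Marquardt solver, so 'lbfgs' stands in here.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 9))            # nine input features
y = (X.sum(axis=1) > 0).astype(int)      # synthetic binary target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Try several hidden-layer configurations (count and width)
for hidden in [(5,), (10,), (5, 5), (10, 10)]:
    clf = MLPClassifier(hidden_layer_sizes=hidden, solver="lbfgs",
                        max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print(hidden, round(clf.score(X_te, y_te), 3))
```

In practice the configuration with the best held-out score would be retained, mirroring the study's procedure of changing the hidden topology during training and simulation.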
“…There are several algorithms for training the neural network weights, the most important being backpropagation (BP), in which the output error starts from the output layer and propagates backwards until it reaches the hidden layer adjacent to the input layer, updating the weights along the way. Based on the update strategy, there are different variants of BP, the most common being BP based on gradient descent (GD) [20]. With reference to Figure 1, the GD-based BP algorithm for updating the weights can be summarized by the following equations, assuming a linear activation function for the output layer and nonlinear activation functions for the hidden layers.…”
Section: The DL Controller (mentioning)
confidence: 99%
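The update scheme described in that passage can be sketched in a few lines of NumPy: one hidden layer with a nonlinear (tanh) activation, a linear output layer, and plain gradient-descent weight updates driven by the backpropagated output error. All dimensions, data, and the learning rate are illustrative assumptions.

```python
# Minimal sketch of gradient-descent backpropagation: tanh hidden
# layer, linear output layer, as assumed in the quoted passage.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))                 # inputs
t = X @ np.array([0.5, -1.0, 2.0])           # targets (synthetic task)

W1 = rng.normal(scale=0.1, size=(3, 8))      # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8,))        # hidden -> output weights
eta = 0.05                                   # learning rate

for _ in range(500):
    h = np.tanh(X @ W1)                      # nonlinear hidden layer
    y = h @ W2                               # linear output layer
    e = y - t                                # output error
    # Backpropagate: output-layer gradient, then hidden-layer gradient
    gW2 = h.T @ e / len(X)
    gW1 = X.T @ (np.outer(e, W2) * (1 - h ** 2)) / len(X)
    W2 -= eta * gW2                          # gradient-descent updates
    W1 -= eta * gW1

mse = float(np.mean((np.tanh(X @ W1) @ W2 - t) ** 2))
print(mse)                                   # final mean-squared error
```

The `(1 - h ** 2)` factor is the derivative of tanh, which is how the error "propagates backwards" from the output layer into the hidden layer before each weight update.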
“…A backpropagation strategy is used to optimize the NN's performance. Scaled conjugate gradient backpropagation (trainscg) is one of the most commonly applied training functions for NNs and has produced efficient results [27]. Therefore, this technique is selected as the learning function of the design.…”
Section: Proposed Ensemble Design (mentioning)
confidence: 99%
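`trainscg` is MATLAB's scaled conjugate gradient training function; there is no direct scikit-learn equivalent. As a hedged stand-in, the sketch below trains a small network's flattened weight vector with SciPy's nonlinear conjugate-gradient optimizer to show the general idea of CG-based weight training. The network shape, data, and loss are assumptions for illustration only.

```python
# Hedged sketch: conjugate-gradient training of a tiny network via
# scipy.optimize.minimize(method="CG"), standing in for MATLAB's
# trainscg (scaled conjugate gradient), which has no SciPy analog.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 4))
t = (X[:, 0] - X[:, 1] > 0).astype(float)     # synthetic binary target

def unpack(w):
    W1 = w[:4 * 6].reshape(4, 6)              # input -> hidden weights
    W2 = w[4 * 6:]                            # hidden -> output weights
    return W1, W2

def loss(w):
    W1, W2 = unpack(w)
    y = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ W2)))  # sigmoid output
    return float(np.mean((y - t) ** 2))       # mean-squared error

w0 = rng.normal(scale=0.1, size=4 * 6 + 6)    # flattened initial weights
res = minimize(loss, w0, method="CG")         # conjugate-gradient training
print(res.fun)                                # final training error
```

Unlike plain gradient descent, conjugate-gradient methods choose search directions that account for previous steps, which is the property that makes `trainscg` efficient in the cited work.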