2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN)
DOI: 10.1109/icacccn.2018.8748660

Comparative study of various training algorithms of artificial neural network

Cited by 69 publications (6 citation statements)
References 8 publications
“…A typical numerical optimization approach can be used to optimize the performance function of multilayer feed-forward networks during training [29]. The various training algorithms that are available in the Deep Learning Toolbox software and that use gradient- or Jacobian-based methods are explained in the following section [30]. Various methods are available to stop training.…”
Section: Training Algorithms (mentioning)
confidence: 99%
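The stopping criteria this statement alludes to can be made concrete with a small sketch. Below is a minimal, hypothetical Python loop for gradient-based training of a feed-forward network; the function names, thresholds, and parameters are illustrative assumptions, not taken from the cited paper or the Deep Learning Toolbox.

```python
import numpy as np

# Minimal sketch of gradient-based training with common stopping criteria
# (maximum epochs, performance goal, minimum gradient norm). All names and
# thresholds here are illustrative assumptions, not from the cited paper.
def train(w, loss_fn, grad_fn, lr=0.01, max_epochs=1000,
          goal=1e-6, min_grad=1e-10):
    for _ in range(max_epochs):
        g = grad_fn(w)
        if np.linalg.norm(g) < min_grad:  # gradient too flat to make progress
            break
        w = w - lr * g                    # steepest-descent update
        if loss_fn(w) < goal:             # performance goal reached
            break
    return w
```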
“…The algorithm uses the advantage of both methods and chooses the one according to closeness to the optimal value. The iterative training algorithm makes LM faster than other training algorithms [35].…”
Section: Artificial Neural Network (mentioning)
confidence: 99%
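For context, the two methods this statement refers to are gradient descent and Gauss-Newton, which Levenberg-Marquardt (LM) blends through a damping term. Below is a minimal sketch of one LM step, assuming a residual vector e and its Jacobian J with respect to the weights w; the damping parameter mu and its default value are illustrative assumptions.

```python
import numpy as np

# One Levenberg-Marquardt step for minimizing 0.5 * ||e(w)||^2, assuming
# e is the residual vector and J its Jacobian with respect to w.
# Large mu behaves like small-step gradient descent; small mu approaches
# Gauss-Newton, which is how LM "chooses" between the two methods.
def lm_step(w, e, J, mu=1e-3):
    n = J.shape[1]
    H = J.T @ J + mu * np.eye(n)         # damped Gauss-Newton approximation
    delta = np.linalg.solve(H, J.T @ e)  # solve (J'J + mu*I) delta = J'e
    return w - delta
```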
“…Finally, to train (or optimize) the ANN in order to update the weights and to minimize the approximation error (17) (more precisely, its error norm, the Euclidean distance (20)), a training algorithm must be chosen [68]. Very often, the gradient descent method is applied, updating the weights with the negative gradient of the error norm scaled by some 0 < η < 1 [63], Section 10.4.4, i.e., …”
Section: Preliminaries: Data Sets, Performance Measures and Training A... (mentioning)
confidence: 99%
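The update rule this statement describes is the standard gradient descent step. A minimal sketch, assuming a hypothetical function grad_E that returns the gradient of the error norm at w:

```python
# Gradient descent weight update as described above: step against the
# gradient of the error norm, scaled by a learning rate 0 < eta < 1.
# grad_E is a hypothetical function returning the gradient at w.
def gradient_descent_update(w, grad_E, eta=0.1):
    return w - eta * grad_E(w)
```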