2018
DOI: 10.24297/ijct.v17i1.7106
Faster Convergent Artificial Neural Networks

Abstract: Proposed in this paper is a novel fast-convergence algorithm applied to artificial neural networks (ANNs), with a learning rate based on the eigenvalues of the Hessian matrix associated with the input data. That is, the learning rate applied to the backpropagation algorithm changes dynamically with the input data used for training. The best choice of learning rate for fast convergence to an accurate value is derived. This newly proposed fast-convergence algorithm is applied to a traditional multilayer ANN architecture wi…
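The eigenvalue-based learning-rate choice summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes that for a linear layer trained with mean-squared error the Hessian is proportional to the input correlation matrix X^T X / n, and uses the classical gradient-descent stability bound (step size below 2/λ_max) to motivate the choice η = 1/λ_max.

```python
import numpy as np

def hessian_based_learning_rate(X):
    """Learning rate from the largest eigenvalue of a Hessian proxy.

    Assumption: for a linear layer with MSE loss, the Hessian of the
    loss w.r.t. the weights is proportional to X^T X / n, so the
    largest eigenvalue of that matrix bounds the safe step size.
    """
    n = X.shape[0]
    H = X.T @ X / n                      # Hessian proxy (input correlation matrix)
    lam_max = np.linalg.eigvalsh(H)[-1]  # eigvalsh returns ascending order
    return 1.0 / lam_max                 # fast but stable (below 2 / lam_max)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # synthetic training inputs
eta = hessian_based_learning_rate(X)
print(eta)
```

Because the rate is recomputed from whatever input batch is used for training, it changes dynamically with the data, which is the behaviour the abstract describes.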

Cited by 2 publications (5 citation statements)
References 1 publication
“…The final weight matrix [Wij]tr obtained at the convergence of the 20th training set is then stored. The relevant weight matrix corresponds to a fast-convergence performance of the test ANN, as described in [6].…”
Section: Training Phase Results
“…Further, it is observed that the convergence rate can be sped up by using the learning rate derived on the basis of the Hessian matrix applied to the input data, as in [6]. It improves the ANN convergence…”
Section: Discussion
“…However, since it applies the steepest-descent method to update the weights, it suffers from a slow convergence rate and may yield suboptimal solutions [3]. Therefore, in this work a procedure is used that increases the rate of convergence [4]. Applying this faster technique means far fewer iterations are required to train the network.…”
Section: Figure
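The slow-convergence problem of plain steepest descent that the citation above refers to can be illustrated on a toy quadratic loss. This is a hypothetical demonstration, not the cited work's experiment: the Hessian, step sizes, and tolerance below are all made up for illustration. A step size matched to the largest Hessian eigenvalue converges in far fewer iterations than a small fixed one.

```python
import numpy as np

def iterations_to_converge(H, eta, tol=1e-6, max_iter=100_000):
    """Steepest descent on the quadratic loss 0.5 * w^T H w."""
    w = np.ones(H.shape[0])
    for k in range(max_iter):
        if np.linalg.norm(w) < tol:   # minimizer is w = 0
            return k
        w = w - eta * (H @ w)         # gradient of 0.5 w^T H w is H w
    return max_iter

H = np.diag([1.0, 10.0])              # toy Hessian, eigenvalues 1 and 10
lam_max = 10.0
fast = iterations_to_converge(H, 1.0 / lam_max)  # Hessian-based step size
slow = iterations_to_converge(H, 0.01)           # small fixed step size
print(fast, slow)
```

The Hessian-based step damps the stiffest direction in a single update, while the small fixed rate crawls along every direction, which is why the iteration counts differ by an order of magnitude.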