Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000): Neural Computing: New Challenges and Perspectives for the New Millennium, 2000
DOI: 10.1109/ijcnn.2000.857856
Technique of learning rate estimation for efficient training of MLP

Cited by 24 publications (6 citation statements) · References 9 publications
“…is used for the neurons of the hidden (F2) and output (F3) layers of both the MLP and RNN models. The standard back-propagation training algorithm [11] with a constant or adaptive learning rate [20] is used to train both NN models.…”
Section: Neural-based Prediction Methods (mentioning)
Confidence: 99%
“…Such a learning rate can only be obtained for linear and ReLU activation functions. When using the sigmoid activation function, we can only obtain approximate expressions for the learning rate using a Taylor series expansion (Golovko et al., 2000; Golovko, 2003). Since this is a very complicated problem, as mentioned before, most scientists use the steepest descent method together with the line search approach.…”
Section: Related Work (mentioning)
Confidence: 99%
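The closed-form learning rate mentioned above can be illustrated for the simplest case: a linear output unit with squared error, where the loss is quadratic in the weights and the loss-minimizing step along the gradient direction is exact. A minimal sketch, assuming toy data; the variable names are illustrative and not taken from the cited papers:

```python
import numpy as np

# For a linear unit with squared error, the loss 0.5*||X w - t||^2 is
# quadratic, so the optimal learning rate along the gradient direction g
# has the closed form alpha* = (g . g) / (g . H . g), with constant Hessian H.
X = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, -1.0]])  # toy inputs
t = np.array([1.0, 0.0, 2.0])                        # toy targets

w = np.zeros(2)
H = X.T @ X                        # Hessian of 0.5*||X w - t||^2
for _ in range(10):
    g = X.T @ (X @ w - t)          # gradient of the loss at w
    alpha = (g @ g) / (g @ H @ g)  # exact loss-minimizing step this iteration
    w -= alpha * g
```

With a sigmoid activation the loss is no longer quadratic, which is why the cited works fall back on a Taylor expansion to get only an approximate version of this expression.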
“…Therefore, in this paper we investigate the calculation of an adaptive learning rate for an RBM, based on the steepest descent technique (Golovko et al., 2000, 2023; Golovko, 2003). This approach minimizes the loss function to calculate the adaptive learning step.…”
Section: Introduction (mentioning)
Confidence: 99%
“…The steepest descent method for calculating the adaptive learning rate [4] is used to remove the classical disadvantage of the back-propagation algorithm, namely the empirical choice of the learning rate.…”
Section: Multi-layer Perceptron Model (mentioning)
Confidence: 99%
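The steepest-descent idea running through these citations — choosing, at every iteration, the learning rate that minimizes the loss along the negative-gradient direction — can be sketched with a simple grid-based line search. The cited works derive the step analytically; the function names, candidate grid, and toy quadratic loss below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def adaptive_lr_step(w, loss_fn, grad_fn,
                     candidates=np.geomspace(1e-4, 1.0, 25)):
    """One training step with an adaptive learning rate: evaluate the loss
    at several candidate step sizes along the negative gradient and keep
    the one that gives the lowest loss."""
    g = grad_fn(w)
    best = min(candidates, key=lambda lr: loss_fn(w - lr * g))
    return w - best * g, best

# Toy quadratic loss standing in for a network's error surface.
A = np.diag([2.0, 1.0])
b = np.array([1.0, 1.0])
loss = lambda w: 0.5 * (A @ w - b) @ (A @ w - b)
grad = lambda w: A.T @ (A @ w - b)

w = np.zeros(2)
for _ in range(40):
    w, lr = adaptive_lr_step(w, loss, grad)
```

Each step reuses the learning rate that best reduces the loss rather than a fixed empirical constant, which is the disadvantage of classical back-propagation that the citing papers aim to remove.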