1988
DOI: 10.1007/bf00332914

Accelerating the convergence of the back-propagation method

Cited by 979 publications (347 citation statements)
References 8 publications
“…Consequently, several researchers have devised modifications to the backpropagation algorithm to increase the convergence rate. The general approach has been to vary the learning rate dynamically during training in order to maintain it at the largest value that will not cause oscillations [13], [2]. Attempts have also been made to learn from a subset of the patterns to determine the network size and initialize the weights, reducing training time [12].…”
Section: Introduction (mentioning)
confidence: 99%
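
The rate-adaptation idea in this excerpt can be stated compactly. Below is a minimal Python sketch of the heuristic described: grow the rate while the error keeps falling, shrink it when the error rises (a sign of oscillation). The function name and the grow/shrink factors are illustrative assumptions, not the exact rules of refs [13] or [2].

def adapt_learning_rate(lr, error, prev_error, grow=1.05, shrink=0.7):
    # Grow/shrink factors are illustrative, not the cited papers' values.
    if error < prev_error:
        return lr * grow    # error still falling: push the rate up
    return lr * shrink      # error rose (oscillation sign): back off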
“…Consequently, several researchers have devised modifications to the backpropagation algorithm to increase the convergence rate. The general approach has been to vary the learning rate dynamically during training in order to maintain it at the largest value that will not cause oscillations (13] (2]. Attempts have been made to learn from a subset of the patterns to determine the network size and initialize the weights to reduce training time [12].…”
Section: Introductionmentioning
confidence: 99%
“…In the present study, the time-series forecasts denoted by UWD_k(t) were generated using activation functions g(X) described by the logarithmic sigmoid ψ(X) and the output function χ(X) [VOGL et al. 1988], as seen in equations (7) and (8).…”
Section: Theoretical Overview: Extreme Learning Machine (mentioning)
confidence: 99%
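
For reference, the logarithmic (logistic) sigmoid is commonly written g(X) = 1/(1 + e^(-X)); the sketch below shows this standard form only, as a plausible reading of the excerpt. The citing paper's exact ψ(X) and χ(X) from its equations (7) and (8) are not reproduced here.

import numpy as np

def log_sigmoid(x):
    # Standard logistic sigmoid, squashing any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))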
“…It is a simple heuristic strategy for accelerating the BP algorithm that is based on the use of a momentum term (BPM). The second method was suggested by Vogl [31]. It accelerates the convergence of BP by adapting the learning rate at each epoch in such a way that a monotone decrease of the error is enforced (VMRZA).…”
Section: Training MLPs Using Back-propagation Algorithms (mentioning)
confidence: 99%
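
A minimal Python sketch of one training epoch combining the two accelerations this excerpt names: a momentum term (BPM) and a Vogl-style learning-rate adaptation that enforces a monotone decrease of the error. The constants (mu, phi, beta, eps) and the grad_fn/error_fn callables are illustrative assumptions, not the exact values or interface of [31].

import numpy as np

def vogl_epoch(w, dw_prev, lr, prev_error, grad_fn, error_fn,
               mu=0.9, phi=1.05, beta=0.7, eps=1e-3):
    dw = mu * dw_prev - lr * grad_fn(w)      # momentum-smoothed gradient step
    w_new = w + dw
    error = error_fn(w_new)
    if error < prev_error:
        return w_new, dw, lr * phi, error    # error fell: accept, raise rate
    if error > prev_error * (1.0 + eps):
        # Error grew noticeably: reject the step, cut the rate, and
        # reset the momentum history so the next step starts fresh.
        return w, np.zeros_like(dw), lr * beta, prev_error
    return w_new, dw, lr, error              # marginal change: keep the rate

The accept/reject structure is what makes the error decrease monotone: a weight update that raises the error is discarded rather than kept.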