1995
DOI: 10.1016/0893-6080(94)00067-v

An efficient constrained learning algorithm with momentum acceleration

Abstract: An algorithm for efficient learning in feedforward networks is presented. Momentum acceleration is achieved by solving a constrained optimization problem using nonlinear programming techniques. In particular, minimization of the usual mean square error cost function is attempted under an additional condition whose purpose is to optimize the alignment of the weight update vectors in successive epochs. The algorithm is applied to several benchmark training tasks (exclusive-or, encoder, multiplex…
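The constrained update described in the abstract can be sketched concretely. Below is a minimal illustration, not the authors' exact formulation: at each epoch the weight step dw is chosen to maximize its alignment with the previous step, subject to a prescribed decrease of the cost along the gradient and a bounded step length. Lagrange multipliers then force dw to be a linear combination of the gradient and the previous step, which is where the momentum acceleration comes from. The function name, the feasibility heuristic xi, and the constant dP are assumptions made for illustration.

```python
import numpy as np

def aleco_like_step(grad, prev_dw, dP=0.1, xi=0.9):
    """One constrained-learning step in the spirit of the paper (a sketch).

    Choose dw maximizing alignment with prev_dw, subject to
        grad . dw  = -dQ      (prescribed cost decrease)
        ||dw||**2  = dP**2    (bounded step length),
    which by Lagrange multipliers gives dw = a*grad + b*prev_dw.
    """
    Igg = float(grad @ grad)
    Igv = float(grad @ prev_dw)
    Ivv = float(prev_dw @ prev_dw)
    if Igg == 0.0:                 # stationary point: no step to take
        return np.zeros_like(grad)
    if Ivv == 0.0:                 # first epoch: plain gradient step of length dP
        return -(dP / np.sqrt(Igg)) * grad

    # Feasibility: the demanded decrease dQ cannot exceed dP*||grad||;
    # xi < 1 keeps the two constraints compatible (an assumed heuristic).
    dQ = xi * dP * np.sqrt(Igg)

    # Substituting a = (-dQ - b*Igv)/Igg into the norm constraint leaves
    #     b**2 * (Ivv - Igv**2/Igg) = dP**2 - dQ**2/Igg;
    # the positive root maximizes the alignment dw . prev_dw.
    denom = Ivv - Igv**2 / Igg     # >= 0 by the Cauchy-Schwarz inequality
    if denom < 1e-12:              # prev_dw (anti)parallel to grad
        return -(dP / np.sqrt(Igg)) * grad
    b = np.sqrt((dP**2 - dQ**2 / Igg) / denom)
    a = (-dQ - b * Igv) / Igg
    return a * grad + b * prev_dw
```

By construction both constraints hold exactly (grad · dw = -dQ and ||dw|| = dP), so each step secures a first-order cost decrease while still bending toward the previous update direction.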

Cited by 72 publications (36 citation statements)
References 22 publications
“…In order to improve the generalization performance and convergence rate, Perantonis et al [19] proposed incorporating a priori information implicit in the problems into the BP algorithm, which facilitates the learning process and leads to better solutions. However, the learning algorithm is complicated and its computational requirements are large.…”
Section: Connection Weight Constraints From Function Approximation Pr…
confidence: 99%
“…Neural networks, especially the multilayer perceptron network (MLPN), have been used successfully in many fields [14]–[24]. Some examples include using a two-layered perceptron network to factorize polynomials in two or more variables [16], [17], applying a one-layered linear perceptron to invert nonsingular matrices [21], and solving linear equations with neural networks [22], [23].…”
Section: Introduction
confidence: 99%
“…Moreover, our results show attractive generalization properties: compared with on-line BP, ALECO-2 achieved better recognition rates in 4 out of 6 test sets, including substantial improvements in the barc and btimes test sets; in the remaining two test sets, its recognition accuracy was marginally inferior to that of on-line BP. The good generalization ability of ALECO-2 can probably be attributed to the fact that the cost function is changed monotonically and gradually [5], without the abrupt jumps sometimes involved in learning algorithms which incorporate heuristics in their formulation (including on-line BP). Note that in the same spirit of constrained learning, it is possible to augment ALECO-2 with weight elimination techniques [10] which will hopefully further improve its generalization ability without adverse effect on its learning speed.…”
Section: Results
confidence: 97%
“…The use of momentum is based on the expectation that bigger weight steps can be achieved by filtering out high-frequency variations of the error surface in the weight space. ALECO-2 is based on the idea of obtaining optimal weight steps by optimizing, at each epoch of the learning algorithm, the Euclidean distance between the current and previous epoch weights [5]. In this way, improved learning speed is achieved.…”
Section: Derivation of ALECO-2
confidence: 99%
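To make the improved-learning-speed claim concrete, the constrained step sketched earlier can be dropped into an ordinary epoch loop. The following hypothetical end-to-end sketch uses the exclusive-or benchmark mentioned in the abstract; the 2-2-1 network, the numerical gradient, and the constants are illustrative choices rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
w = rng.normal(scale=0.5, size=9)    # 2-2-1 net: W1(4) + b1(2) + W2(2) + b2(1)

def forward(w, x):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

def mse(w):
    return np.mean([(forward(w, x) - t) ** 2 for x, t in zip(X, y)])

def mse_grad(w, eps=1e-5):
    # Central differences keep the sketch short; the paper would use backprop.
    g = np.empty_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (mse(w + e) - mse(w - e)) / (2 * eps)
    return g

dw = np.zeros_like(w)                # no momentum history at epoch 0
for epoch in range(2000):
    dw = aleco_like_step(mse_grad(w), dw, dP=0.05)
    w = w + dw
print(f"final MSE: {mse(w):.4f}")
```

Because each epoch's step is constrained to a fixed length dP, the loop needs no learning-rate schedule; the alignment objective alone decides how much of the previous step is carried forward.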