1998
DOI: 10.1109/72.701180

A hybrid linear/nonlinear training algorithm for feedforward neural networks

Abstract: This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid…
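
The hybrid strategy described in the abstract separates the two weight classes: the linear output-layer weights have a closed-form least-squares solution (computed via SVD), while only the nonlinear hidden-layer weights require iterative optimization. Below is a minimal sketch of one such iteration, assuming a single-hidden-layer MLP with sigmoid units; the paper pairs the SVD solve with BFGS updates on the nonlinear weights, and a plain gradient step stands in for BFGS here. All names and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hybrid_step(X, y, V, b, lr=0.01):
    """One hybrid iteration: SVD least-squares solve for the linear output
    weights, then a gradient step on the nonlinear hidden weights.

    X: (n, d) inputs, y: (n,) targets, V: (d, m) hidden weights, b: (m,) biases.
    """
    H = sigmoid(X @ V + b)                     # hidden-layer outputs, (n, m)
    A = np.column_stack([H, np.ones(len(H))])  # append a bias column
    W, *_ = np.linalg.lstsq(A, y, rcond=None)  # SVD-based least squares
    w, w0 = W[:-1], W[-1]                      # linear weights and offset
    e = H @ w + w0 - y                         # output residuals, (n,)
    dH = np.outer(e, w) * H * (1.0 - H)        # backprop through the sigmoid
    V = V - lr * (X.T @ dH) / len(X)           # update nonlinear weights
    b = b - lr * dH.mean(axis=0)               # update hidden biases
    return V, b, (w, w0)
```

Because the linear weights are re-solved exactly at every iteration, the iterative search effectively runs over the nonlinear parameters alone, which is what gives the hybrid scheme its reduced-dimension advantage.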

Cited by 134 publications (55 citation statements) | References 20 publications

“…In all cases active power measurements are recorded for applied voltages in the range -12% to +14% of the nominal voltage of 230 V in increments of 2%. The MLP networks are trained using the BFGS training algorithm [10] with leave-one-out cross validation used to optimize the number of neurons, h. The results obtained for the three loads are presented in rows (a), (b) and (c) of Fig.…”
Section: Results
Mentioning confidence: 99%
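
The model-selection procedure quoted above, leave-one-out cross-validation over the hidden-layer size, can be sketched as follows. This uses scikit-learn's L-BFGS-trained MLPRegressor as a stand-in for the paper's hybrid BFGS algorithm; the function name and candidate range are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_score

def select_hidden_size(X, y, candidate_sizes=range(1, 11)):
    # Leave-one-out CV over candidate hidden-layer sizes; returns the
    # size with the lowest mean squared prediction error.
    best_size, best_mse = None, np.inf
    for h in candidate_sizes:
        model = MLPRegressor(hidden_layer_sizes=(h,), activation="logistic",
                             solver="lbfgs", max_iter=2000)
        scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                                 scoring="neg_mean_squared_error")
        mse = -scores.mean()
        if mse < best_mse:
            best_size, best_mse = h, mse
    return best_size, best_mse
```

Leave-one-out is affordable here because each network is small; for larger data sets a k-fold split would usually replace it.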
“…There are various configurations and structures of NNs, but all contain an array of neurons that are linked together, usually in multiple layers. In this application a single hidden layer Multilayer Perceptron (MLP) topology is chosen because of its universal function approximation capabilities, good generalisation properties and the availability of robust efficient training algorithms [12]. The output of a single hidden layer MLP can be written as a linear combination of sigmoid functions (i.e.…”
Section: Artificial Neural Network
Mentioning confidence: 99%
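
The linear-combination form referred to above can be written out explicitly; this is the standard single-hidden-layer MLP expression, with generic symbols (h hidden neurons, output weights w_j, hidden weights v_j, biases b_j) not taken from the citing paper:

```latex
\hat{y}(x) \;=\; w_0 \;+\; \sum_{j=1}^{h} w_j\,\sigma\!\left(v_j^{\top}x + b_j\right),
\qquad
\sigma(z) \;=\; \frac{1}{1+e^{-z}}
```

The output is linear in the w_j and nonlinear only in the v_j and b_j, which is exactly the split the hybrid linear/nonlinear training algorithm exploits.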
“…MLP training was performed using the hybrid BFGS training algorithm [12] with stopped minimisation used to prevent over-fitting [13]. The optimum number of neurons (M) was determined for each model by systematically evaluating different network sizes and selecting the network with the minimum MSE on the test data set.…”
Section: MLP Training
Mentioning confidence: 99%
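
Stopped minimisation (early stopping) as cited above halts training once a held-out validation error stops improving. A minimal sketch, with step_fn and predict_fn as hypothetical placeholders for one optimiser update and the network's forward pass:

```python
import numpy as np

def train_with_early_stopping(step_fn, predict_fn, params,
                              X_tr, y_tr, X_val, y_val,
                              max_iters=500, patience=20):
    # After each optimiser update, measure validation MSE; keep the
    # best-validating parameters and halt once validation error has not
    # improved for `patience` consecutive iterations.
    best_params, best_mse, stall = params, np.inf, 0
    for _ in range(max_iters):
        params = step_fn(params, X_tr, y_tr)   # one training update
        mse = float(np.mean((predict_fn(params, X_val) - y_val) ** 2))
        if mse < best_mse:
            best_params, best_mse, stall = params, mse, 0
        else:
            stall += 1
            if stall >= patience:
                break
    return best_params, best_mse
```

Returning the best-validating parameters, rather than the final ones, is what prevents the network from over-fitting during the later iterations.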
“…Inevitable model errors will lower the control accuracy and robustness of the control system, and may even lead to instability [1]. Therefore, it is necessary not only to improve the accuracy of the prediction model but also to take additional measures to compensate for deficiencies in the model predictions.…”
Section: Introduction
Mentioning confidence: 99%