Artificial Higher Order Neural Networks for Modeling and Simulation, 2013
DOI: 10.4018/978-1-4666-2175-6.ch006

Fundamentals of Higher Order Neural Networks for Modeling and Simulation

Abstract: In this chapter, the authors provide fundamental principles of Higher Order Neural Units (HONUs) and Higher Order Neural Networks (HONNs) for modeling and simulation. An essential core of HONNs can be found in higher-order weighted combinations or correlations between the input variables and the HONU. Beyond the high quality of nonlinear approximation of static HONUs, the capability of dynamic HONUs for the modeling of dynamic systems is shown and compared to conventional recurrent neural networks when a pract…

Cited by 33 publications (15 citation statements) | References 20 publications
“…In the next subsection we show QNU and its linear nature of optimization (by the L-M algorithm) that, in principle, prevents QNU from the local-minima issue for a given training data set, so the weight convergence of QNU is superior to that of conventional perceptron-type neural networks [23, 41].…”
Section: Prediction Methods
confidence: 99%
“…The output of QNU from Figure 4 can be written in a vector-multiplication form that can be decomposed into a long-vector representation as follows:

$$\tilde{y}(k+n_s) = \sum_{i=0}^{n}\sum_{j=i}^{n} w_{i,j}\, x_i x_j = \mathbf{w}\cdot\mathbf{colx},$$

where $x_0 = 1$ (as shown in Figure 4), $\tilde{y}$ is the predicted value, $x_1, x_2, \dots, x_n$ are external neural inputs at sample time $k$, $w_{i,j}$ are the neural weights of QNU, $\mathbf{w}$ is a long-vector representation of the weight matrix of QNU, and $\mathbf{colx}$ is a long column vector of the polynomial terms of the neural inputs, defined as follows:

$$\mathbf{colx} = \begin{bmatrix} x_0 x_0 & x_0 x_1 & \cdots & x_0 x_n & x_1 x_1 & \cdots & x_n x_n \end{bmatrix}^{T}.$$

Notice that for weight optimization of the polynomial static model (11), all $x_i$ and $y(k+n_s)$ are substituted with measured training data, so (11) yields a linear combination of the neural weights that has, in principle, a unique solution for the given training data. Thus, contrary to MLP networks, the linear optimization nature of QNU implies that QNU avoids the local-minima issue for given training data while the neural model maintains the high quality of nonlinear approximation that we have observed so far [23, 41].…”
Section: Prediction Methods
confidence: 99%
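The quoted passage hinges on the fact that the static QNU output is linear in its weights once the long column vector of quadratic input terms is formed. The following is a minimal NumPy sketch of that idea; the function names (`colx`, `qnu_output`, `fit_qnu`) are illustrative and not taken from the chapter or the citing paper, and ordinary least squares is used here as a stand-in for the Levenberg-Marquardt iterations mentioned in the quotation, since both exploit the same linearity in the weights.

```python
import numpy as np

def colx(x):
    """Long column vector of quadratic terms x_i * x_j (i <= j),
    with the bias input x_0 = 1 prepended (illustrative reconstruction)."""
    x = np.concatenate(([1.0], np.asarray(x, dtype=float)))
    n = len(x)
    return np.array([x[i] * x[j] for i in range(n) for j in range(i, n)])

def qnu_output(w, x):
    """Static QNU output: inner product of the long weight vector with colx(x)."""
    return w @ colx(x)

def fit_qnu(X, y):
    """Fit QNU weights to measured data. The model is linear in w, so this is
    an ordinary least-squares problem with, in principle, a unique solution
    for a given training data set (no local minima)."""
    Phi = np.vstack([colx(row) for row in X])   # design matrix of polynomial terms
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```

For the static model, the single-shot least-squares solution and iterated L-M updates reach the same optimum; the point of the quotation is that this optimum is unique because the model is linear in its weights while remaining nonlinear in its inputs.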
“…$n_{ya} - n_{yb} = 12$ and also $n_{ua} - n_{ub} = 12$ (the estimated time constant of this pulverized-firing boiler has been specified by experts as approximately 12 minutes). Also, the first layer (6) plays a filtering role due to its step-delayed feedback to the network input (4), and its recurrent feedback naturally calls for training by the Backpropagation Through Time (BPTT) method [15-17], which is a powerful, efficient, and yet practical optimization method, as it can be achieved by a combination of a gradient-descent rule and the Levenberg-Marquardt algorithm [18]. The sigmoid function ϕ(.…”
Section: Neural Network for NOx Prediction
confidence: 99%
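The cited passage describes a dynamic network whose sigmoid first layer filters step-delayed feedback before the QNU, trained by BPTT with a gradient-descent/Levenberg-Marquardt combination. The sketch below only illustrates one forward prediction step of such a structure; the layer equations numbered (4) and (6) belong to the citing paper and are not reproduced here, and all shapes and names in this snippet are assumptions for illustration.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def dynamic_qnu_step(w_qnu, w_phi, y_hist, u_hist):
    """One forward step of a recurrent QNU-based predictor (illustrative only).

    y_hist : recent (step-delayed) model outputs fed back to the input
    u_hist : recent measured external inputs
    """
    x = np.concatenate(([1.0], y_hist, u_hist))   # augmented input with x_0 = 1
    hidden = sigmoid(w_phi @ x)                   # sigmoid "filtering" first layer
    z = np.concatenate(([1.0], hidden))
    # quadratic neural unit on the filtered state (long-vector form)
    terms = np.array([z[i] * z[j] for i in range(len(z)) for j in range(i, len(z))])
    return w_qnu @ terms

# Training would feed the predicted output back into y_hist across the whole
# training window, so the gradient is propagated through time (BPTT); the
# weight update itself can combine a gradient-descent rule with the
# Levenberg-Marquardt algorithm, as described in the quotation.
```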
“…The sigmoid function ϕ(·), which is usually considered the main nonlinearity of conventional neural networks, plays a different role for this dynamic network, because the major nonlinearity is provided by the QNU [9, 10, 18]. The sigmoid function ϕ(·)…”
Section: Neural Network for NOx Prediction
confidence: 99%