1990 American Control Conference
DOI: 10.23919/acc.1990.4791170

A Neural Network Structure for System Identification


Cited by 42 publications (12 citation statements) | References 1 publication
“…Three models were created based on these data sets: a continuous linear model with first-order elements, a nonlinear ANN-ARMA model of the type found in the literature (Bhat et al. 1990; Donat et al. 1990; Haesloop and Holt 1990; Jones et al. 1989; Jordan and Jacobs 1990; Narendra and Parthasarathy 1990; Pineda 1989; Waibel 1989), and a nonlinear ANN-CT model of the architecture described above. For each of the network models, results are averaged over 10 runs.…”
Section: ___
Mentioning confidence: 98%
“…The method of Haesloop and Holt uses linear information in the form of direct links from the input layer to the output layer, in addition to the standard pathways for activation through the hidden layers (Haesloop and Holt 1990). In this case, the output of the network is calculated as the sum of the linear model and the nonlinear network, which is trained to account for only the residual nonlinear behavior.…”
Section: Related Work
Mentioning confidence: 99%
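The direct-linear-links idea described above can be sketched in a few lines: the network output is the sum of a linear input-to-output term and a nonlinear hidden-layer term that models only the residual. The layer sizes, weight scales, and names below are illustrative assumptions, not the original authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 5, 1
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))  # hidden -> output weights
L = rng.normal(scale=0.1, size=(n_out, n_in))       # direct input -> output links

def forward(x):
    """Output = direct linear term + nonlinear residual term."""
    linear = L @ x              # linear feedthrough pathway
    hidden = np.tanh(W1 @ x)    # standard nonlinear pathway
    residual = W2 @ hidden
    return linear + residual

x = np.array([0.5, -1.0, 0.2])
y = forward(x)
```

In this arrangement `L` can be initialized from a known linear model of the plant, so the hidden layers only need to learn the nonlinear correction.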
“…By using the chain rule recursively [14]-[17], the parameters are adjusted at training sample n.…”
Section: B. The Training of the HF Elman WNN Model
Mentioning confidence: 99%
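The recursive chain-rule adjustment referred to here is ordinary gradient-descent backpropagation: the error at sample n is differentiated through each intermediate quantity to reach the parameters. A minimal single-neuron sketch, with an assumed learning rate and a tanh activation standing in for the wavelet units:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=3)   # parameters to adjust
eta = 0.1                # learning rate (assumed value)

def predict(w, x):
    return np.tanh(w @ x)            # simple nonlinear model y = tanh(s), s = w @ x

x_n, d_n = np.array([0.2, -0.4, 0.1]), 0.3   # training sample n (input, target)

# Chain rule applied recursively for E = 0.5 * e**2, e = y - d:
# dE/dw = (dE/de) * (de/dy) * (dy/ds) * (ds/dw)
y = predict(w, x_n)
e = y - d_n
grad = e * (1.0 - y**2) * x_n        # tanh'(s) = 1 - y**2

w_new = w - eta * grad               # gradient-descent update at sample n
```

Each additional layer simply adds one more factor to the product of derivatives, which is what makes the adjustment "recursive."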
“…By using the direct term, the SRWNN has the advantages of a direct linear feedthrough network, such as the initialization of network parameters based on process knowledge and enhanced extrapolation outside the examples of the learning data sets [9]. W is the weighting vector of the SRWNN, represented by:…”
Section: Description of the SRWNN Structure
Mentioning confidence: 99%