2016
DOI: 10.1504/ijmic.2016.075814
Structure optimisation of input layer for feed-forward NARX neural network

Cited by 4 publications (6 citation statements)
References 16 publications
“…A thorough survey of literature suggests that for the prediction of time series, dynamical neural networks are most efficient as they can be trained and tuned to predict time-dependent data. Amongst the various developments of dynamical neural networks, the Non-linear AutoRegressive model with exogenous inputs (NARX) neural network has gained great popularity in the research community [154][155][156][157][158][159].…”
Section: Artificial Neural Network
confidence: 99%
“…Then the number of neurons in the hidden layer of the neural network were optimized until no further improvement was achieved. In order to set the input and feedback delays, a correlation analysis was performed on the data, and then through a trial and error procedure, the best performing delays were selected for each model [154,157]. It should be noted that artificial neural networks often suffer the two problems of overfitting and premature convergence to local solutions.…”
Section: Artificial Neural Network
confidence: 99%
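The citing work above sets NARX input and feedback delays by correlating the series with its own past values before a trial-and-error refinement. A minimal sketch of that correlation-analysis step, assuming NumPy and with the function names, lag window, and selection threshold all illustrative assumptions rather than the authors' actual procedure:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a 1-D series for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

def candidate_delays(x, max_lag=10, n_delays=3):
    """Return the n_delays lags with the strongest absolute autocorrelation,
    as a sorted list of candidate input/feedback delays for a NARX model."""
    acf = autocorrelation(x, max_lag)
    order = np.argsort(-np.abs(acf))  # strongest correlations first
    return sorted(int(k) + 1 for k in order[:n_delays])

# Toy periodic signal: the strongest lags cluster near its period.
series = np.sin(0.3 * np.arange(200))
print(candidate_delays(series, max_lag=20))
```

In practice the shortlist produced this way would then be narrowed by the trial-and-error model comparison the quote describes, since high autocorrelation alone does not guarantee the best predictive delay set.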
“…Lagged variables are generated by a number of linear time-delayed input terms or past values, normally in ascending order, such as P(t−n)… P(t−2), P(t−1), to estimate the output value P(t) [34].…”
Section: Lagged Variable Size
confidence: 99%
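The quote above describes feeding past values P(t−n), …, P(t−2), P(t−1) to the network to estimate P(t). A minimal sketch of that lagged-variable construction, assuming NumPy; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def make_lagged_matrix(series, n_lags):
    """Build (inputs, targets): row t holds [P(t-n), ..., P(t-2), P(t-1)]
    in ascending lag order, paired with the target P(t)."""
    series = np.asarray(series, dtype=float)
    rows = [series[t - n_lags:t] for t in range(n_lags, len(series))]
    inputs = np.stack(rows)          # shape: (len(series) - n_lags, n_lags)
    targets = series[n_lags:]        # aligned P(t) values
    return inputs, targets

p = np.arange(10.0)                  # toy series P(0)..P(9)
X, y = make_lagged_matrix(p, n_lags=3)
print(X[0], y[0])                    # first row [0. 1. 2.], target 3.0
```

Each row of `X` is one time-delayed input vector of the kind the NARX input layer consumes; choosing `n_lags` is exactly the structure-optimisation question the cited paper addresses.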
“…ANN can achieve superb performance, eg in [9], but as with most black-box methods, do not give any insight into the virtually unknown model that has been identified. A comparison between black-box and grey-box identification in the automotive field can be found in [10], where Savaresi et al have successfully identified magnetorheological damper models through both a nonlinear semiphysical model and a Nonlinear AutoRegressive eXogenous (NARX) structure.…”
Section: *Corresponding Author: Department Of Aeronautical and Automo…
confidence: 99%
“…In recent times, the approach of trying to reproduce the mechanisms of human learning through artificial neural network methods has become increasingly popular. Artificial neural networks can achieve a superb performance (see, for example, the paper by Li and Best 9 ) but, as with most black-box methods, they do not give any insight into the virtually unknown model that has been identified. A comparison between black-box identification and grey-box identification in the automotive field can be found in the work by Savaresi et al, 10 who successfully identified magnetorheological damper models, using both a non-linear semiphysical model and a non-linear autoregressive exogenous (NARX) structure.…”
Section: Introduction
confidence: 99%