2021
DOI: 10.1016/j.ifacol.2021.08.417
Stability of discrete-time feed-forward neural networks in NARX configuration

Cited by 30 publications (40 citation statements)
References 33 publications
“…Figure 1, which correspond to the states of the dynamical system representing the plant under study. A first simple RNN architecture is the so-called Neural Nonlinear ARX (NNARX) [49,50], widely employed mainly thanks to its simple structure and training. This architecture uses FFNNs as regression functions embedded in a NARX setting and is trained by minimizing the simulation error, thus yielding a model able to mimic the system's free-run simulation.…”
Section: NN for Reinforcement Learning
confidence: 99%
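The NNARX structure described in the excerpt above, a feed-forward network regressing on past inputs and outputs and evaluated in free-run simulation, can be sketched as follows. The lag orders, network size, and function names here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal NNARX sketch (lag orders na, nb and hidden size are assumptions).
# One-step predictor: y(k) = f(y(k-1..k-na), u(k-1..k-nb)),
# where f is a one-hidden-layer feed-forward network (FFNN).
rng = np.random.default_rng(0)
na, nb, hidden = 2, 2, 8                  # output lags, input lags, hidden units
W1 = rng.standard_normal((hidden, na + nb)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.standard_normal((1, hidden)) * 0.1
b2 = np.zeros(1)

def ffnn(phi):
    """Feed-forward regression function f(phi) with tanh activation."""
    return (W2 @ np.tanh(W1 @ phi + b1) + b2).item()

def free_run(u, y0):
    """Free-run simulation: the network is fed its OWN past outputs,
    which is the regime that simulation-error training targets."""
    y = list(y0)                          # na initial output values
    for k in range(len(y0), len(u)):
        # Regressor: most recent outputs and inputs first.
        phi = np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
        y.append(ffnn(phi))
    return np.array(y)

u = np.sin(0.1 * np.arange(50))
y_sim = free_run(u, y0=[0.0, 0.0])
print(y_sim.shape)                        # (50,)
```

Training would then adjust `W1, b1, W2, b2` so that `free_run` matches measured output trajectories, i.e. it minimizes the simulation error rather than the one-step prediction error.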
“…Model (4) can easily be recast in the general form (3). Specifically, (4) corresponds to a discrete-time normal canonical form [49]. Indeed, letting i ∈ {1, ..., N}…”
Section: NNARX Network
confidence: 99%
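The discrete-time normal canonical form referred to in this excerpt can be sketched as follows; the state labeling and notation are assumptions for illustration, not reproduced from the paper. Stacking the lagged quantities as states $x_1, \dots, x_N$, an NNARX model takes the shift-chain structure

```latex
\begin{aligned}
x_i(k+1) &= x_{i+1}(k), && i \in \{1, \dots, N-1\},\\
x_N(k+1) &= f\bigl(x_1(k), \dots, x_N(k), u(k)\bigr),\\
y(k)     &= x_N(k),
\end{aligned}
```

where each state update simply shifts the next state along, and only the last state is updated through the feed-forward regression function $f$.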
“…First, gated recurrent networks naturally retain long-term memory of past trajectories. In contrast, for feed-forward auto-regressive architectures, this memory must be enforced by supplying the past input-output data points as inputs to the network [24], typically resulting in less accurate long-term learning compared to RNNs [25]. Secondly, since the controller is implemented by a gated recurrent network, which is inherently strictly proper, the issue of the controller's improperness discussed in [24] does not arise.…”
Section: Introduction
confidence: 99%