2024
DOI: 10.1109/TAC.2023.3327937

An Incremental Input-to-State Stability Condition for a Class of Recurrent Neural Networks

William D'Amico, Alessio La Bella, Marcello Farina

Abstract: This paper proposes a novel sufficient condition for the incremental input-to-state stability (δISS) of a class of recurrent neural networks (RNNs). The established condition is compared with others available in the literature and is shown to be less conservative. Moreover, it can be applied to the design of incrementally input-to-state stable RNN-based control systems, resulting in a linear matrix inequality constraint for some specific RNN architectures. The formulation of nonlinear observers for the considered system c…
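For context, the δISS property named in the abstract admits a standard comparison-function formulation; the sketch below follows that textbook definition and is not necessarily the exact statement of the paper's condition. For a system $x_{k+1} = f(x_k, u_k)$, δISS asks for functions $\beta \in \mathcal{KL}$ and $\gamma \in \mathcal{K}_\infty$ such that, for any two initial states $x_0^a, x_0^b$ and any two input sequences $u^a, u^b$,

\[
\|x_k^a - x_k^b\| \le \beta\big(\|x_0^a - x_0^b\|,\, k\big) + \gamma\Big(\sup_{0 \le j < k} \|u_j^a - u_j^b\|\Big) \quad \text{for all } k \ge 0,
\]

where $x_k^a$ and $x_k^b$ denote the corresponding state trajectories. In words, trajectories asymptotically forget their initial conditions, and their distance stays bounded by the discrepancy between the inputs.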

Cited by 3 publications (3 citation statements). References 34 publications.
“…δISS conditions and incremental Lyapunov functions that satisfy (3) can be found for the main RNN architectures (LSTM [8], GRU [9], [12], REN [15], and other particular classes of RNN [10]). Moreover, the property of local Lipschitzianity is satisfied by quadratic functions, which are the most common type of Lyapunov function used to study RNN stability.…”
Section: Problem Formulation (mentioning, confidence: 99%)
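As an illustration of the quadratic incremental Lyapunov functions this excerpt mentions (a generic sketch, not the specific function used in the cited works), one can take

\[
V(x, \hat{x}) = (x - \hat{x})^\top P\, (x - \hat{x}), \qquad P = P^\top \succ 0.
\]

Such a function is locally Lipschitz: writing $\delta = x - \hat{x}$ and $\delta' = y - \hat{y}$, one has $|V(x,\hat{x}) - V(y,\hat{y})| = |(\delta - \delta')^\top P (\delta + \delta')| \le \lambda_{\max}(P)\,\|\delta + \delta'\|\,\|\delta - \delta'\|$, which on any bounded set is a constant times $\|\delta - \delta'\|$.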
“…for any input sequence in U, ∥x_k − x̂_k∥ → 0 as k → ∞. Observers respecting Assumption 2 have been designed for the main RNN architectures (LSTM [8], GRU [13], REN [15], and other particular classes of RNN [10]).…”
Section: A. Observer (mentioning, confidence: 99%)
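For context, a generic state observer of the kind this excerpt refers to (a hedged sketch with a Luenberger-type correction and, purely for illustration, a linear output map $y_k = C x_k$; not necessarily the structure or Assumption 2 of the cited works) is

\[
\hat{x}_{k+1} = f(\hat{x}_k, u_k) + L\,(y_k - C \hat{x}_k),
\]

where $f$ is the RNN state map and the gain $L$ is designed so that the estimation-error dynamics converge, yielding $\|x_k - \hat{x}_k\| \to 0$ as $k \to \infty$ for any admissible input sequence.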