2019 IEEE 58th Conference on Decision and Control (CDC)
DOI: 10.1109/cdc40024.2019.9029489

Local Asymptotic Stability Analysis and Region of Attraction Estimation with Gaussian Processes

Abstract: Determining the region of attraction of nonlinear systems is a difficult problem, which is typically approached by means of Lyapunov theory. State-of-the-art approaches provide either high flexibility regarding the Lyapunov function or parallelizability of computation. Aiming at both flexibility and parallelizability, we propose a method to obtain a Lyapunov-like function for stability analysis by learning the infinite horizon cost function with a Gaussian process based on approximate dynamic programming. We …
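To make the approach in the abstract more tangible, here is a minimal sketch that is not the paper's method: it replaces the GP and approximate dynamic programming machinery with a naive truncation of the infinite horizon cost for a toy system, and then checks the Lyapunov decrease condition on a grid as a crude region-of-attraction indicator. The dynamics, stage cost, horizon, and grid below are all illustrative assumptions.

```python
# A minimal sketch, not the paper's GP-based method: approximate the infinite
# horizon cost V(x) ~ sum_{k=1}^{K} l(f^k(x)) by truncated rollouts of a toy
# discrete-time system and check the Lyapunov decrease condition V(f(x)) < V(x)
# on a grid. The dynamics f, stage cost l, horizon K, and grid are assumptions.
import numpy as np

def f(x):
    # toy nonlinear dynamics with an asymptotically stable origin (assumption)
    return np.array([0.9 * x[0] + 0.1 * x[1] ** 2,
                     0.8 * x[1] - 0.1 * x[0] * x[1]])

def stage_cost(x):
    # quadratic stage cost l(x) = ||x||^2
    return float(x @ x)

def V_truncated(x, K=200, cap=1e6):
    # V(x) ~ sum_{k=1}^{K} l(f^k(x)); the cap flags diverging rollouts
    total, xk = 0.0, np.asarray(x, dtype=float)
    for _ in range(K):
        xk = f(xk)
        total += stage_cost(xk)
        if total > cap:
            return np.inf
    return total

# points where the decrease condition holds give a rough region-of-attraction estimate
grid = [np.array([a, b]) for a in np.linspace(-1, 1, 21) for b in np.linspace(-1, 1, 21)]
ok = [x for x in grid if np.allclose(x, 0.0) or V_truncated(f(x)) < V_truncated(x)]
print(f"{len(ok)} of {len(grid)} grid points satisfy the decrease condition")
```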

Cited by 15 publications (9 citation statements). References 15 publications.
“…Since we do not have any target values for the supervised learning, we cannot directly apply the GP regression approach. Therefore, we approximate the infinite horizon cost Ṽ∞(x) = Σ_{k=1}^∞ l(f^k(x)), where f^k(·) denotes the k-times application of the dynamics f(·) and l : ℝ^d → ℝ_+ is a chosen stage cost, by transforming the Bellman equation at training points into a regression problem as proposed in (Lederer and Hirche, 2019). This is formalized in the following lemma.…”
Section: Learning Nonparametric Control Lyapunov Functions (mentioning, confidence: 99%)
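To illustrate the transformation described in the excerpt above, the following sketch assembles the regression problem implied by the identity Ṽ∞(x) − Ṽ∞(f(x)) = l(f(x)): the inputs are the pairs (x_i, f(x_i)) and the targets are the stage costs l(f(x_i)). The one-dimensional dynamics, stage cost, and training points are stand-ins chosen here, not taken from the cited papers.

```python
# Sketch of the transformation described in the excerpt: from
# V(x) = sum_{k>=1} l(f^k(x)) one gets V(x) - V(f(x)) = l(f(x)), so the pairs
# (x_i, f(x_i)) with targets l(f(x_i)) form a regression problem even though
# V itself is never observed. Dynamics, cost, and points are assumptions; the
# truncated V is computed only to verify the identity numerically.
import numpy as np

f = lambda x: 0.8 * x - 0.1 * x ** 3   # toy contractive dynamics (assumption)
l = lambda x: x ** 2                   # stage cost l(x) = x^2

def V_inf(x, K=300):
    # truncated reference value of the infinite horizon cost, for checking only
    total = 0.0
    for _ in range(K):
        x = f(x)
        total += l(x)
    return total

X = np.linspace(-1.0, 1.0, 9)          # training inputs x_i
pairs = np.stack([X, f(X)], axis=1)    # regression inputs (x_i, f(x_i))
targets = l(f(X))                      # regression targets l(f(x_i))

# the Bellman identity holds at the training points up to truncation error
lhs = np.vectorize(V_inf)(X) - np.vectorize(V_inf)(f(X))
print("max Bellman residual:", np.max(np.abs(lhs - targets)))
```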
“…Due to (Lederer and Hirche, 2019), a function satisfying this equation on a finite set of pairs (x, f(x)) can be obtained through noiseless GP regression with the kernel k…”
Section: Learning Nonparametric Control Lyapunov Functions (mentioning, confidence: 99%)
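The kernel is truncated in the excerpt, so the sketch below uses a difference-kernel construction that is our assumption rather than the cited one: modeling V as a zero-mean GP with an RBF base kernel, the observed differences V(x_i) − V(f(x_i)) are jointly Gaussian, and noiseless regression on them yields a posterior mean for V whose differences interpolate the targets from the previous snippet.

```python
# Hedged sketch of the noiseless-GP step; the kernel is truncated in the excerpt,
# so this difference-kernel construction is an assumption, not necessarily the
# cited one. Modeling V ~ GP(0, k) with an RBF base kernel k, the differences
# g_i = V(x_i) - V(f(x_i)) have covariance
#   k(x_i, x_j) - k(x_i, f(x_j)) - k(f(x_i), x_j) + k(f(x_i), f(x_j)),
# and noiseless regression on g_i = l(f(x_i)) gives a posterior mean for V.
import numpy as np

def rbf(a, b, ls=0.5):
    # base kernel k(a, b) = exp(-(a - b)^2 / (2 ls^2)) for scalar inputs
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

f = lambda x: 0.8 * x - 0.1 * x ** 3   # same toy dynamics as above (assumption)
l = lambda x: x ** 2                   # same stage cost

X = np.linspace(-1.0, 1.0, 9)          # training inputs x_i
Xp = f(X)                              # successor states f(x_i)
y = l(Xp)                              # noiseless targets l(f(x_i))

# covariance of the observed differences under the GP prior on V
K_g = rbf(X, X) - rbf(X, Xp) - rbf(Xp, X) + rbf(Xp, Xp)
alpha = np.linalg.solve(K_g + 1e-10 * np.eye(len(X)), y)   # jitter for conditioning

def V_mean(z):
    # posterior mean of V(z) via the cross-covariance between V(z) and the differences
    z = np.atleast_1d(np.asarray(z, dtype=float))
    return (rbf(z, X) - rbf(z, Xp)) @ alpha

# sanity check: the learned mean reproduces the Bellman differences at training points
print("max residual:", np.max(np.abs((V_mean(X) - V_mean(Xp)) - y)))
```

Note that the difference observations determine V only up to an additive constant; here the zero-mean prior resolves that ambiguity.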
“…The Lyapunov Neural Network [21] can incrementally adapt the RoA's shape given an initial safe set. As an alternative, GPs can obtain a Lyapunov-like function [22], or an LF can be synthesized to provide guarantees on a controller's stability while training [23].…”
Section: Introduction (mentioning, confidence: 99%)
“…The complexity of systems we encounter prompts a shift from classical parametric techniques in favor of more flexible machine learning techniques (e.g. neural networks or Gaussian processes) for prediction [1,2], model-based control [3,4] and analysis [5,6,7]. Traditionally, representations of systems are in the immediate state-space concerned with "dynamics of states".…”
Section: Introduction (mentioning, confidence: 99%)