2022
DOI: 10.48550/arxiv.2211.09088
Preprint

Online convex optimization for constrained control of linear systems using a reference governor

Abstract: In this work, we propose a control scheme for linear systems subject to pointwise in time state and input constraints that aims to minimize time-varying and a priori unknown cost functions. The proposed controller is based on online convex optimization and a reference governor. In particular, we apply online gradient descent to track the time-varying and a priori unknown optimal steady state of the system. Moreover, we use a λ-contractive set to enforce constraint satisfaction and a sufficient convergence rate…
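
The abstract describes the core update: at each time step, one gradient step is taken on the current, a priori unknown cost, and the resulting reference is kept inside a constraint-admissible set. The Python sketch below illustrates only that online-gradient reference update under stated assumptions; the box projection, step size, and drifting quadratic cost are placeholders chosen for illustration and stand in for the paper's λ-contractive set, reference governor, and actual cost functions, which are not reproduced here.

import numpy as np

# Minimal, illustrative sketch of online gradient descent tracking a
# time-varying optimal steady state. The projection set, step size, and
# cost below are assumptions, not the paper's actual design.

def project_box(r, r_min, r_max):
    """Project the reference onto a simple box set. This is a stand-in
    for the paper's lambda-contractive admissible set (an assumption)."""
    return np.clip(r, r_min, r_max)

def online_gradient_reference(grad_fn, r0, steps, eta, r_min, r_max):
    """At each step, take one gradient step on the current (a priori
    unknown) cost and project back onto the admissible set."""
    r = np.array(r0, dtype=float)
    history = [r.copy()]
    for t in range(steps):
        g = grad_fn(t, r)                       # gradient of the time-varying cost at r
        r = project_box(r - eta * g, r_min, r_max)
        history.append(r.copy())
    return np.array(history)

# Hypothetical time-varying quadratic cost with a slowly drifting minimizer.
def grad_quadratic(t, r):
    target = np.array([np.sin(0.05 * t), np.cos(0.05 * t)])
    return 2.0 * (r - target)

refs = online_gradient_reference(grad_quadratic, r0=[0.0, 0.0],
                                 steps=200, eta=0.1,
                                 r_min=-0.8, r_max=0.8)
print(refs[-1])   # reference tracking the (clipped) drifting optimum

In the paper itself, constraint satisfaction is enforced through the reference-governor mechanism and the λ-contractive set for the closed-loop linear system; the snippet only illustrates the online gradient descent component of the scheme.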

Cited by 2 publications (2 citation statements)
References 17 publications

“…In cases where a window of predictions is available, a receding horizon gradient descent algorithm is suggested in [3] with a dynamic regret analysis. In a more recent line of work [4], the authors introduce a memory-based gradient descent algorithm and in [5], tackle the constrained tracking problem with policy regret guarantees. In [6], the authors analyze the output tracking scheme of an iterative learning controller and provide dynamic and static regret bounds.…”
Section: Introduction (mentioning, confidence: 99%)

“…The LQT problem for sequentially revealed adversarial reference states is studied mostly with policy regret guarantees, with one of the first works [3] suggesting a relatively computationally heavy algorithm. In a more recent line of work [4], the authors introduce a memory-based gradient descent algorithm and in [5], tackle the constrained tracking problem. Several works also provide dynamic regret guarantees for tracking of unknown targets, however, their settings differ from ours.…”
Section: Introduction (mentioning, confidence: 99%)