2010
DOI: 10.1109/tac.2010.2046053

Design of Affine Controllers via Convex Optimization

Abstract: We consider a discrete-time time-varying linear dynamical system, perturbed by process noise, with linear noise-corrupted measurements, over a finite horizon. We address the problem of designing a general affine causal controller, in which the control input is an affine function of all previous measurements, in order to minimize a convex objective, in either a stochastic or worst-case setting. This controller design problem is not convex in its natural form, but can be transformed to an equivalent con…
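The transformation the abstract alludes to can be illustrated on a toy instance: once the input is parameterized as an affine function of past disturbances, the state and input trajectories become affine in the policy parameters for each fixed noise sample, so a quadratic objective reduces to least squares. A minimal numpy sketch — the scalar system, horizon, and all names below are our own illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, r, T, N = 0.9, 1.0, 0.1, 4, 200
W = 0.3 * rng.standard_normal((N, T))        # sampled process noise, x_0 = 0

# Disturbance-feedback parameterization: u_t = q[t] + sum_{s<t} Q[t,s] w_s.
# For each fixed noise sample, x_t and u_t are affine in (q, Q), so the
# empirical quadratic cost is convex in the parameters.
idx = [(t, s) for t in range(T) for s in range(t)]
npar = T + len(idx)

def affine_traj(w):
    """Map theta = (q, vec Q) to [x_1, sqrt(r) u_0, ..., x_T, sqrt(r) u_{T-1}]
    as A @ theta + c for one noise sample w."""
    A_rows, c_rows = [], []
    xA, xc = np.zeros(npar), 0.0
    for t in range(T):
        uA = np.zeros(npar)
        uA[t] = 1.0                          # q[t] term
        for k, (tt, s) in enumerate(idx):
            if tt == t:
                uA[T + k] = w[s]             # Q[t, s] * w_s term
        xA = a * xA + b * uA                 # x_{t+1} coefficients
        xc = a * xc + w[t]                   # x_{t+1} offset
        A_rows += [xA.copy(), np.sqrt(r) * uA]
        c_rows += [xc, 0.0]
    return np.array(A_rows), np.array(c_rows)

pairs = [affine_traj(w) for w in W]
A = np.vstack([p[0] for p in pairs])
c = np.concatenate([p[1] for p in pairs])
theta, *_ = np.linalg.lstsq(A, -c, rcond=None)   # min ||A theta + c||^2

cost_affine = np.mean((A @ theta + c) ** 2)      # optimized affine policy
cost_zero = np.mean(c ** 2)                      # u = 0 baseline
print(cost_affine <= cost_zero)                  # True
```

Since theta = 0 recovers the zero-input trajectories, the least-squares optimum can only improve on the baseline.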


Cited by 126 publications
(90 citation statements)
References 61 publications
“…It has been originally advocated in the context of stochastic programming (see Charnes et al [15], Garstka and Wets [20], and references therein), where such policies are known as decision rules. More recently, the idea has received renewed interest in robust optimization (Ben-Tal et al [7]), and has been extended to linear systems theory (Ben-Tal et al [4,5]), with notable contributions from researchers in robust model predictive control and receding horizon control (see Löfberg [27], Bemporad et al [1], Kerrigan and Maciejowski [25], Skaf and Boyd [32], and references therein). In all the papers, which usually deal with the more general case of multidimensional linear systems, the authors typically restrict attention, for purposes of tractability, to the class of disturbance-affine policies, and show how the corresponding policy parameters can be found by solving specific types of optimization problems, which vary from linear and quadratic programs (Ben-Tal et al [4], Kerrigan and Maciejowski [24,25]) to conic and semidefinite (Löfberg [27], Ben-Tal et al [4]), or even multiparametric, linear, or quadratic programs (Bemporad et al [1]).…”
Section: Problem 1: Consider a One-dimensional Discrete-time Linear…
confidence: 99%
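The worst-case setting referenced in the excerpt above stays tractable for the same reason: the trajectory is affine in a box-bounded disturbance, so its worst case has a closed form that is convex in the policy parameters, which is what turns robust disturbance-affine design into a linear program. A small check of that closed form, on made-up numbers:

```python
import itertools
import numpy as np

# Assumed toy setup: terminal state x_T = f + g @ w with box disturbance
# ||w||_inf <= 1.  Then max_w |x_T| = |f| + ||g||_1, convex in (f, g).
f, g = 0.4, np.array([0.5, -0.2, 0.1])
closed_form = abs(f) + np.abs(g).sum()

# Verify by enumerating the box vertices (extreme points suffice,
# since the objective is affine in w).
corners = np.array(list(itertools.product([-1.0, 1.0], repeat=g.size)))
brute = np.max(np.abs(f + corners @ g))
print(closed_form, brute)   # 1.2 1.2
```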
“…The second part of Proposition 3.1 provides a method for optimizing over affine output feedback policies u = Uy by re-parameterizing the problem into one of optimizing over a different but equivalent class of parametric policies, for which u and x become affine functions of the parameters [4,15,24]. The resulting optimization problem can then be solved using Proposition 3.2.…”
Section: Proposition 3.2
confidence: 99%
“…An attractive feature of such affine parameterizations is that they can be shown to be equivalent (in the state feedback case) to parameterizations of control policies as affine functions of prior states [16,24], or (in the output feedback case) as affine functions of prior measurements [15,4]. The idea underpinning these equivalence results is akin to that of the well-known Youla parameterization (or Q-parameterization) in linear systems [28], and relies on a similar nonlinear transformation to produce a convex set of constraint admissible policies over which one can optimize.…”
Section: Introduction
confidence: 99%
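The equivalence described above can be checked numerically on a toy scalar system: an affine state-feedback policy u = K x + v, pushed through the Youla-like nonlinear transformation, yields a disturbance-feedback policy u = Q w + q with identical closed-loop trajectories. All names in this sketch are our own, under the assumption x_{t+1} = a x_t + b u_t + w_t with x_0 = 0:

```python
import numpy as np

T, a, b = 4, 0.9, 1.0
rng = np.random.default_rng(1)

# Stacked dynamics x = G u + H w with x = (x_1..x_T), u = (u_0..u_{T-1}).
G = np.array([[b * a ** (t - s) if s <= t else 0.0 for s in range(T)]
              for t in range(T)])
H = np.array([[a ** (t - s) if s <= t else 0.0 for s in range(T)]
              for t in range(T)])

K = 0.3 * np.tril(rng.standard_normal((T, T)), k=-1)  # u_t sees x_1..x_t only
v = rng.standard_normal(T)

# Nonlinear transform: K G is strictly lower triangular, so I - K G
# is invertible and the closed loop is well posed.
M = np.linalg.inv(np.eye(T) - K @ G)
Q, q = M @ K @ H, M @ v            # equivalent disturbance-feedback policy

w = rng.standard_normal(T)
u_df = Q @ w + q                   # disturbance-feedback input
x_df = G @ u_df + H @ w

x = np.zeros(T + 1)                # state-feedback rollout
u_sf = np.zeros(T)
for t in range(T):
    u_sf[t] = K[t] @ x[1:] + v[t]  # strictly causal: only x_1..x_t enter
    x[t + 1] = a * x[t] + b * u_sf[t] + w[t]

print(np.allclose(u_df, u_sf), np.allclose(x_df, x[1:]))  # True True
```

Note the transform (K, v) -> (Q, q) is nonlinear in K, which is exactly why optimizing directly over (K, v) is nonconvex while the set of (Q, q) is convex.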
“…We solve the design problem using the sampling method, with (training) samples, and verify the results by simulation on another (validation) set of 10000 samples. We compare the performance of the nonlinear controller found using our method to the optimal linear controller, designed using the same training set (see [53]), and the certainty-equivalent model predictive control (CE-MPC). We also show the results obtained with a prescient controller, i.e., a controller that is not causal (which, of course, gives us a lower bound on achievable performance).…”
Section: Example
confidence: 99%
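The prescient lower bound mentioned in the excerpt can be illustrated on a hypothetical one-step problem: the prescient input may depend on the realized noise w, while the causal one is committed before w is observed. All constants below are our own toy choices:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, r, x0 = 0.9, 1.0, 0.1, 1.0
w = 0.5 * rng.standard_normal(10000)       # validation samples

def cost(u, w):
    x1 = a * x0 + b * u + w                # one-step dynamics
    return x1 ** 2 + r * u ** 2

u_causal = -a * b * x0 / (b ** 2 + r)      # optimal given only E[w] = 0
u_presc = -b * (a * x0 + w) / (b ** 2 + r) # sees each w before acting
J_causal = cost(u_causal, w).mean()
J_presc = cost(u_presc, w).mean()
print(J_presc <= J_causal)                 # True
```

The bound holds sample by sample: the prescient input minimizes the cost for each realized w, so its average cost lower-bounds that of any causal controller on the same validation set.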