2014
DOI: 10.48550/arxiv.1410.4535
Preprint

Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints

Stefan Streif,
Matthias Karl,
Ali Mesbah

Abstract: This paper presents a stochastic model predictive control approach for nonlinear systems subject to time-invariant probabilistic uncertainties in model parameters and initial conditions. The stochastic optimal control problem entails a cost function in terms of expected values and higher moments of the states, and chance constraints that ensure probabilistic constraint satisfaction. The generalized polynomial chaos framework is used to propagate the time-invariant stochastic uncertainties through the nonlinear …

Cited by 3 publications (7 citation statements)
References 56 publications (141 reference statements)
“…Since there are no time-varying stochastic nonlinear MPC algorithms that guarantee feasibility for parameters with Gaussian noise, linear MPC was selected for comparison [21], [24]. Linear MPC acts as a baseline to benchmark the performance between the optimal control and reinforcement learning algorithms.…”
Section: B. Model Predictive Control
confidence: 99%
“…Variants of MPC extend to nonlinear systems [15], [16] and account for uncertainty by propagating worst-case outcomes [17]–[19] or using probabilistic constraints [20]. Equations from the unscented transform have been incorporated as MPC constraints in an attempt to improve state estimation [21]–[23]. However, these methods cannot guarantee stability for systems with time-varying parameters following a Gaussian distribution, as the possible changes to the system are unbounded and cannot be corrected by an input-constrained control law [21], [24].…”
Section: Introduction
confidence: 99%
“…PCEs in this context are used in a scenario-based SNMPC algorithm that applies least-squares estimation online, for every iteration of inputs, to approximate the coefficients of an orthogonal polynomial approximation; this is known as non-intrusive PCE 32 . For polynomial-type systems, Galerkin projection is used instead to determine the coefficients, which is called intrusive PCE 33 . Chance constraints can be enforced either using Chebyshev's inequality 34 or by applying an MC sampling approximation on the orthogonal polynomials themselves 33 .…”
Section: Introduction
confidence: 99%
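The non-intrusive, least-squares route described in the excerpt above can be sketched in a few lines. The scalar model, sample size, and polynomial order below are illustrative assumptions, not details taken from the cited works:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

# Hypothetical scalar model: the implicit mapping from an uncertain
# parameter theta ~ N(0, 1) to a state of interest.
def model(theta):
    return np.exp(0.3 * theta) + 0.1 * theta**2

rng = np.random.default_rng(0)
theta = rng.standard_normal(500)   # samples of the uncertain parameter
y = model(theta)                   # model evaluations (non-intrusive: model is a black box)

# Least-squares fit of probabilists' Hermite polynomials, which are
# orthogonal with respect to the standard normal measure.
order = 4
Phi = hermevander(theta, order)    # columns He_0(theta), ..., He_4(theta)
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# By orthogonality, the surrogate's mean is c_0 and its variance is
# sum_{k>=1} c_k^2 * k!.
mean_pce = coeffs[0]
var_pce = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k >= 1)
```

Intrusive (Galerkin) PCE, by contrast, projects the model equations themselves onto the same basis to obtain coefficient dynamics, which avoids sampling but is practical mainly for polynomial-type systems.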
“…For polynomial-type systems, Galerkin projection is used instead to determine the coefficients, which is called intrusive PCE 33 . Chance constraints can be enforced either using Chebyshev's inequality 34 or by applying an MC sampling approximation on the orthogonal polynomials themselves 33 . The PCE-based SNMPC algorithm has been extended to the case of output feedback in Bradford and Imsland 35 , Bradford et al 36 by combining the approach with a PCE nonlinear state estimator.…”
Section: Introduction
confidence: 99%
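The Chebyshev route mentioned in these excerpts amounts to replacing the chance constraint with a deterministic back-off on the constraint boundary. A minimal sketch, using the one-sided (Cantelli) form of the inequality and illustrative numbers:

```python
import numpy as np

def chebyshev_backoff(eps):
    """One-sided Chebyshev (Cantelli) multiplier: for any distribution
    with mean mu and standard deviation sigma, enforcing
    mu + kappa * sigma <= x_max guarantees P(x > x_max) <= eps."""
    return np.sqrt((1.0 - eps) / eps)

# Illustrative state statistics, e.g. mean/std taken from a PCE surrogate.
mu, sigma, x_max, eps = 0.8, 0.05, 1.0, 0.05
kappa = chebyshev_backoff(eps)            # sqrt(19), about 4.36
satisfied = mu + kappa * sigma <= x_max   # back-off of about 0.22
```

Because the bound is distribution-free it is conservative: here the point is rejected, although a Gaussian quantile (kappa of about 1.64 for eps = 0.05) would accept it. That conservatism is one motivation for sampling-based approximations of chance constraints.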
“…A simple solution to SNMPC can be found in [11], which linearises the nonlinear system successively and then applies a probabilistic tube method. A popular approach in SNMPC is the use of polynomial chaos (PC) expansions, a computationally efficient tool for accelerating sampling-based techniques [12]. In this method, implicit mappings between variables/parameters and the states are replaced by orthogonal polynomials.…”
Section: Introduction
confidence: 99%