2017
DOI: 10.1016/j.ifacol.2017.08.2462
Particle Model Predictive Control: Tractable Stochastic Nonlinear Output-Feedback MPC

Abstract: We combine conditional state density construction with an extension of the Scenario Approach for stochastic Model Predictive Control to nonlinear systems to yield a novel particle-based formulation of stochastic nonlinear output-feedback Model Predictive Control. Conditional densities given noisy measurement data are propagated via the Particle Filter as an approximate implementation of the Bayesian Filter. This enables a particle-based representation of the conditional state density, or information state, whi…
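The abstract's key mechanism is the particle filter as a tractable approximation of the Bayesian filter: the conditional state density is carried forward as a weighted set of samples. As a minimal sketch only (not the paper's implementation), the bootstrap particle filter step below assumes placeholder dynamics `f(x, u, w)` and a pointwise measurement likelihood `likelihood(y, x)`, both supplied by the user:

```python
import numpy as np

def bootstrap_pf_step(particles, weights, u, y, f, likelihood, rng):
    """One bootstrap particle filter update: propagate, reweight, resample.

    particles  : (M, nx) samples representing the conditional state density
    u, y       : applied input and newly measured output
    f          : x_next = f(x, u, w), assumed nonlinear dynamics with noise w
    likelihood : p(y | x), assumed measurement likelihood
    """
    M = particles.shape[0]
    # Time update: push every particle through the nonlinear dynamics.
    w = rng.standard_normal(particles.shape)
    particles = f(particles, u, w)
    # Measurement update (Bayes rule): reweight by the likelihood of y.
    weights = weights * likelihood(y, particles)
    weights /= weights.sum()
    # Resample to counter weight degeneracy; weights reset to uniform.
    idx = rng.choice(M, size=M, p=weights)
    return particles[idx], np.full(M, 1.0 / M)
```

The resampled particle set plays the role of the information state on which the scenario-based MPC problem is then posed.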

Cited by 29 publications (19 citation statements)
References 21 publications
“…Similarly to Section II, we denote by: $J_\infty(\pi)$ the infinite-horizon optimal value function; $\mu^N$ the sequence of optimal policies in (8)–(9); $\mu^N_0$ the first element of this sequence; $\mu^N_{\mathrm{MPC}} \triangleq \{\mu^N_0, \mu^N_0, \ldots\}$…”
Section: Stochastic Model Predictive Control (mentioning, confidence: 99%)
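The notation $\mu^N_{\mathrm{MPC}} = \{\mu^N_0, \mu^N_0, \ldots\}$ is the usual receding-horizon construction: at each time step the $N$-horizon problem is re-solved and only its first policy is applied. A schematic sketch, where `solve_horizon_problem` is a hypothetical placeholder for whatever finite-horizon solver is available:

```python
def mpc_closed_loop(x0, n_steps, solve_horizon_problem, step):
    """Receding-horizon loop realizing mu_MPC = {mu_0, mu_0, ...}.

    solve_horizon_problem(x) -> [mu_0, ..., mu_{N-1}], optimal N-horizon policies
    step(x, u)               -> next state under the (stochastic) dynamics
    """
    x, trajectory = x0, [x0]
    for _ in range(n_steps):
        policies = solve_horizon_problem(x)  # re-solved at every step
        u = policies[0](x)                   # only the first policy is applied
        x = step(x, u)
        trajectory.append(x)
    return trajectory
```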
“…These performance bounds are available in both the deterministic [3] and the stochastic [4] settings, were one ever able to solve the underlying finite-horizon stochastic problem computationally. While approximating SMPC based on Stochastic Optimal Control via more tractable surrogate problems is possible, as for instance in [5]–[8], one generally loses the associated closed-loop guarantees, in particular regarding the infinite-horizon performance of the generated control laws.…”
Section: Introduction (mentioning, confidence: 99%)
“…That is, we are choosing our control actions in a Hidden Markov Model (HMM [19]) setup. Given control action $u_t = a$ and measured output $y_{t+1} = \theta$, the Bayesian Filter recursion (3)–(4) extends to the POMDP dynamics (8)–(9) as
$$\pi_{t+1,j} = \frac{\sum_{i \in X} \pi_{t,i}\, p^a_{ij}\, r^a_{j\theta}}{\sum_{i,j \in X} \pi_{t,i}\, p^a_{ij}\, r^a_{j\theta}},$$
where $\pi_{t,j}$ denotes the $j$-th entry of the row vector $\pi_t$. We define the cost as in Section II, with stage cost $c(x_t, u_t) = c^a_i$ if $x_t = i \in X$ and $u_t = a \in U$, or $e_i c(a)$ in vectorized form.…”
Section: A. Partially Observable Markov Decision Processes (mentioning, confidence: 99%)
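The recursion above is the standard HMM/POMDP belief (information-state) update. A minimal NumPy sketch under the quoted notation, where `P_a` holds the transition probabilities $p^a_{ij}$ and `R_a` the observation probabilities $r^a_{j\theta}$ (both assumed given):

```python
import numpy as np

def belief_update(pi, P_a, R_a, theta):
    """POMDP belief update after taking action a and observing output theta.

    pi  : (n,) current belief row vector pi_t over the state set X
    P_a : (n, n) transition matrix, P_a[i, j] = p^a_ij
    R_a : (n, m) observation matrix, R_a[j, theta] = r^a_{j,theta}
    """
    # Numerator, per state j: sum_i pi_{t,i} * p^a_ij * r^a_{j,theta}.
    unnormalized = (pi @ P_a) * R_a[:, theta]
    # Denominator sums the numerator over j so the new belief is a distribution.
    return unnormalized / unnormalized.sum()
```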
“…The main difficulty of SNMPC is propagating continuous stochastic uncertainties through nonlinear equations without becoming prohibitively computationally expensive. Several methods have been proposed to approximate this propagation, some of which have been successfully applied to formulate SNMPC approaches: unscented transformations [11], polynomial chaos expansions (PCE) [12], quasi-Monte Carlo (MC) [13], Markov chain MC [14], Gaussian processes [15], Gaussian mixtures (GM) [16], the Fokker–Planck equation [17], linearization [18], and particle filters [19]. The control of systems with discrete stochastic uncertainties has been addressed in [20].…”
Section: Introduction (mentioning, confidence: 99%)
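Of the methods listed, plain Monte Carlo propagation is the conceptual baseline: draw samples of the uncertainty, push them through the nonlinear model, and work with the resulting empirical distribution. A hedged sketch, with a toy nonlinearity chosen purely for illustration:

```python
import numpy as np

def mc_propagate(x_samples, u, f, rng):
    """Propagate state samples through assumed nonlinear dynamics f(x, u, w)."""
    w = rng.standard_normal(x_samples.shape)  # assumed Gaussian process noise
    return f(x_samples, u, w)

# Toy example (illustrative only): scalar dynamics with a sinusoidal nonlinearity.
rng = np.random.default_rng(0)
f = lambda x, u, w: np.sin(x) + u + 0.1 * w
x_samples = rng.standard_normal((1000, 1))
next_samples = mc_propagate(x_samples, 0.5, f, rng)
mean, var = next_samples.mean(), next_samples.var()  # empirical moments
```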