2019
DOI: 10.48550/arxiv.1910.13906
Preprint

Probabilistic performance validation of deep learning-based robust NMPC controllers

Benjamin Karg,
Teodoro Alamo,
Sergio Lucia

Abstract: Solving nonlinear model predictive control problems in real time remains an important challenge despite recent advances in computing hardware, optimization algorithms and tailored implementations. The challenge is even greater when uncertainty is present due to disturbances, unknown parameters, or measurement and estimation errors. To enable the application of advanced control schemes to fast systems and on low-cost embedded hardware, we propose to approximate a robust nonlinear model predictive controller using deep…


Cited by 2 publications (5 citation statements) · References 50 publications
“…which define the piece-wise affine function that the neural network describes. By viewing the neurons as hyperplanes [33], an activation pattern can be defined, which assigns a binary value to every neuron in the hidden layer to model the ReLU function (8). This limits the maximum possible number of different activation patterns to…”
Section: ReLU Network
confidence: 99%
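The activation-pattern idea quoted above can be illustrated with a small sketch (the network size and weights below are made-up placeholders, not taken from the paper): each hidden ReLU neuron defines a hyperplane, and the on/off states of the n hidden neurons form a binary activation pattern, of which at most 2^n are possible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-hidden-layer ReLU network: x -> relu(W x + b).
n_in, n_hidden = 2, 4
W = rng.standard_normal((n_hidden, n_in))
b = rng.standard_normal(n_hidden)

def activation_pattern(x):
    """Binary vector: 1 where the neuron's hyperplane W_i x + b_i > 0 (ReLU active)."""
    return tuple((W @ x + b > 0).astype(int))

# Sample inputs and collect the distinct patterns actually realized;
# the count can never exceed 2 ** n_hidden.
patterns = {activation_pattern(x) for x in rng.uniform(-3, 3, size=(5000, n_in))}
print(len(patterns), "distinct patterns out of at most", 2 ** n_hidden)
```

In practice far fewer than 2^n patterns are realized over a bounded input region, which is what makes pattern-based analyses tractable.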
“…The parametric description of a ReLU network described in this section implies that the network can be seen as an affine function of the input, which includes additional binary variables to describe the ReLU function (8). This fact will be exploited in the next subsection to formulate a mixed-integer linear program that can compute the set of possible outputs of the network for a given set of inputs.…”
Section: ReLU Network
confidence: 99%
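The mixed-integer description quoted above can be sketched for a single ReLU neuron using the standard big-M encoding (a generic illustration with made-up weights, not the paper's formulation; it assumes SciPy >= 1.9 for `scipy.optimize.milp`): a binary variable z switches the neuron between its inactive (h = 0) and active (h = a) branches, and the MILP then maximizes the output over an input box.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical single ReLU neuron h = max(0, w.x + b) over the input box x in [-1, 1]^2.
w = np.array([1.0, 2.0])
b = 0.5

# Interval bounds on the pre-activation a = w.x + b over the box (used as big-M constants).
U = b + np.abs(w).sum()   # upper bound on a:  3.5
L = b - np.abs(w).sum()   # lower bound on a: -2.5

# Decision variables: [x1, x2, h, z], with z binary (ReLU active/inactive).
c = np.array([0.0, 0.0, -1.0, 0.0])           # maximize h == minimize -h
A = np.array([
    [-w[0], -w[1], 1.0, 0.0],                 # h - w.x           >= b         (h >= a)
    [-w[0], -w[1], 1.0, -L],                  # h - w.x - L z     <= b - L     (h <= a - L(1-z))
    [0.0,   0.0,   1.0, -U],                  # h - U z           <= 0         (h <= U z)
])
cons = LinearConstraint(A, lb=[b, -np.inf, -np.inf], ub=[np.inf, b - L, 0.0])
res = milp(c, constraints=cons,
           integrality=[0, 0, 0, 1],
           bounds=Bounds([-1, -1, 0, 0], [1, 1, np.inf, 1]))
print(res.x[2])  # maximum neuron output over the box: 3.5, attained at x = (1, 1)
```

Stacking one such encoding per neuron yields a MILP whose feasible outputs are exactly the network's reachable set over the input set, which is the computation the quoted passage refers to.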
“…Property 1 has been already used in the context of probabilistic scaling and validation (see Alamo et al (2019) and Karg et al (2019)). See also Tempo et al (1997) for the particularization of the result to the case r = 1 and a single constraint.…”
Section: Preliminaries: Probabilistic Upper Bound of a Random Variable
confidence: 99%
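For the r = 1, single-constraint case mentioned in the quote, the classical sample bound states that with N >= ln(1/δ) / ln(1/(1−ε)) i.i.d. samples, the largest observed value upper-bounds the random variable with probability at least 1 − ε, at confidence level 1 − δ. A minimal sketch (the Gaussian samples are a generic stand-in for any scalar performance measure, not the paper's setup):

```python
import numpy as np

# Required sample size for the r = 1, single-constraint case.
eps, delta = 0.05, 1e-6
N = int(np.ceil(np.log(1.0 / delta) / np.log(1.0 / (1.0 - eps))))
print(N)  # 270

rng = np.random.default_rng(1)
samples = rng.normal(size=N)   # stand-in for e.g. a sampled closed-loop constraint value
gamma = samples.max()          # probabilistic upper bound on the random variable
```

Note how weakly N depends on the confidence parameter: δ enters only through ln(1/δ), so very high confidence levels remain cheap.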
“…The first use of probabilistic validation is to compute offline a constraint tightening, following the stochastic tube-based MPC approach proposed in (Lorenzen et al, 2016), but using probabilistic validation setting instead of the scenario approach. Secondly, to guarantee recursive feasibility, we relax the constraints using a penalty function method Kerrigan and Maciejowski (2000) and, following ideas presented in Karg et al (2019), we perform an offline probabilistic design of the penalty parameter, selected among a finite-family of values, so that the desired probabilistic guarantees of the closed-loop constraint satisfaction are fulfilled.…”
Section: Introduction
confidence: 99%
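The offline penalty-parameter design described in the quote can be sketched schematically (everything below is a toy stand-in: `closed_loop_violation` replaces an actual closed-loop simulation, and the candidate values and disturbance model are made up): each candidate from the finite family is validated on sampled disturbances, and the least conservative one meeting the empirical violation target is selected. A rigorous design would additionally size the sample count N using the probabilistic validation bounds above.

```python
import numpy as np

rng = np.random.default_rng(2)

def closed_loop_violation(penalty, w):
    """Toy stand-in for a closed-loop simulation: larger penalties suppress violations."""
    return w > penalty

candidates = [0.5, 1.0, 2.0, 4.0]      # finite family of penalty parameters
N, eps = 1000, 0.05                    # validation samples and target violation level
disturbances = rng.exponential(size=N)

chosen = None
for penalty in sorted(candidates):     # prefer the least conservative (smallest) value
    violations = sum(closed_loop_violation(penalty, w) for w in disturbances)
    if violations / N <= eps:
        chosen = penalty
        break
print(chosen)
```

Restricting the search to a finite family is what makes the offline guarantee possible: each candidate is validated separately, and a union bound over the family preserves the overall confidence level.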