2016
DOI: 10.1080/00207179.2016.1222553

From linear to nonlinear MPC: bridging the gap via the real-time iteration

Cited by 234 publications (168 citation statements)
References 39 publications
“…Next, assuming (20) holds for k − 1 and recalling that $a(\cdot) < 1$, we can apply (29) at iteration k to show that…”
Section: LISS of Time-Distributed Optimization (mentioning, confidence: 99%)
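Equations (20) and (29) are not reproduced in the excerpt, so their exact form is unknown. Purely as a hedged illustration of the induction pattern being invoked, with $e_k$ a generic error quantity and $a(\cdot) < 1$ the contraction factor, such a step typically has the following shape:

```latex
% Hypothetical shape of the induction step; the actual bounds (20)
% and (29) are not shown in the excerpt. Assume the hypothesis
%   e_{k-1} \le a(\cdot)^{k-1} e_0 + b          % "(20) at k-1"
% and the one-step contraction
%   e_k \le a(\cdot)\, e_{k-1} + c.             % "(29) at k"
% Chaining the two and using a(\cdot) < 1 gives
\[
  e_k \;\le\; a(\cdot)^{k} e_0 + a(\cdot)\, b + c ,
\]
% which closes the induction at iteration k.
```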
“…Here, we only give a brief summary of the methodology. For any further details we refer the reader to [17], [18]. Following the SQP approach we linearize the system model with a previous guess $(\bar{x}^g_k, \bar{u}^g_k)$ and use any numerical discretization scheme to obtain the sensitivities…”
Section: Problem Statement (mentioning, confidence: 99%)
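The excerpt describes linearizing the discretized model around a guess trajectory to obtain sensitivities. As a minimal sketch, assuming discretized dynamics $x^+ = f(x, u)$ and a forward finite-difference scheme (the cited works may instead use exact or algorithmic differentiation), the sensitivity matrices $A = \partial f/\partial x$ and $B = \partial f/\partial u$ at one guess point could be formed as:

```python
import numpy as np

def linearize_dynamics(f, x_bar, u_bar, eps=1e-6):
    """Forward finite-difference sensitivities of the discretized
    dynamics x+ = f(x, u) around a guess point (x_bar, u_bar).
    Returns A = df/dx, B = df/du, and the nominal next state f0."""
    nx, nu = x_bar.size, u_bar.size
    f0 = f(x_bar, u_bar)
    A = np.zeros((nx, nx))
    B = np.zeros((nx, nu))
    for i in range(nx):
        dx = np.zeros(nx)
        dx[i] = eps
        A[:, i] = (f(x_bar + dx, u_bar) - f0) / eps
    for j in range(nu):
        du = np.zeros(nu)
        du[j] = eps
        B[:, j] = (f(x_bar, u_bar + du) - f0) / eps
    return A, B, f0
```

In an RTI-style loop this would be evaluated once per shooting node along the guess trajectory $(\bar{x}^g_k, \bar{u}^g_k)$.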
“…At every sampling instant, Problem (3) is solved, and only the first control input $u_0$ is applied to the system. However, the solutions $\bar{x}$ and $\bar{u}$ are used to update the guesses $(\bar{x}^g_k, \bar{u}^g_k) = (\bar{x}_{k+1}, \bar{u}_{k+1})$, where k spans the prediction horizon, following the shifting approach in [17], [18].…”
Section: Problem Statement (mentioning, confidence: 99%)
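The shift-based warm start described here is simple to state in code. A minimal sketch, assuming trajectories stored as arrays with one row per horizon step; duplicating the last entry to fill the freed slot is a common convention, not necessarily the exact rule prescribed in [17], [18]:

```python
import numpy as np

def shift_guess(x_sol, u_sol):
    """Shift the previous solution one step forward to form the next
    guess: drop the first entry and repeat the last one.
    x_sol: (N+1, nx) state trajectory; u_sol: (N, nu) control trajectory."""
    x_guess = np.vstack([x_sol[1:], x_sol[-1:]])
    u_guess = np.vstack([u_sol[1:], u_sol[-1:]])
    return x_guess, u_guess
```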
“…To address these problems, we propose the Sampling Augmented Adaptive RTI (SAA-RTI) algorithm, which decomposes the method of solving (2) into two distinct steps: feasible trajectory planning and trajectory optimization. The approach augments the existing RTI-SQP [16] strategy with state space sampling [10]. The horizon and sampling time for both trajectory planning and optimization steps are chosen as N and $T_s$ respectively, as in (2).…”
Section: Problem Formulation (mentioning, confidence: 99%)
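The excerpt only names the two phases; the control-loop structure they imply can be sketched as below. Both `plan_feasible_trajectory` (the state-space sampling step) and `optimize_around` (the RTI-style local refinement) are hypothetical placeholders, not functions from the cited paper:

```python
def saa_rti_step(plan_feasible_trajectory, optimize_around, x_measured):
    """One hypothetical SAA-RTI control step: first obtain a feasible
    (possibly suboptimal) trajectory by sampling, then refine it with
    a single local optimization pass, and apply the first input."""
    X_feasible = plan_feasible_trajectory(x_measured)        # planning phase
    X_opt, U_opt = optimize_around(X_feasible, x_measured)   # optimization phase
    return U_opt[0]  # receding horizon: only the first input is applied
```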
“…However, solving (2) locally around the feasible but suboptimal trajectory $X^\star_t$ can be done efficiently using a convex Quadratic Program (QP) approximation. We obtain the QP approximation of (2) through the linear time-varying model predictive control paradigm [16]. At any given time t, the model and constraints in (2) are linearized around $X^\star_t$.…”
Section: B. Trajectory Optimization (mentioning, confidence: 99%)
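As a concrete but hedged illustration of such an LTV QP approximation, the sketch below condenses the linearized deviation dynamics over the horizon and solves the resulting unconstrained quadratic subproblem in closed form; handling the linearized inequality constraints would require a generic QP solver and is omitted. The weights Q, R and the condensing scheme are illustrative assumptions, not the formulation of [16]:

```python
import numpy as np
from scipy.linalg import block_diag

def ltv_qp_step(A_list, B_list, Q, R, dx0):
    """Unconstrained condensed LTV-MPC subproblem in deviation variables:
    dx_{k+1} = A_k dx_k + B_k du_k, minimizing
    sum_k dx_{k+1}' Q dx_{k+1} + du_k' R du_k."""
    N = len(A_list)
    nx, nu = B_list[0].shape
    Phi = np.zeros((N * nx, nx))       # free response of dx0
    Gam = np.zeros((N * nx, N * nu))   # forced response of du
    Ak = np.eye(nx)
    for k in range(N):
        Ak = A_list[k] @ Ak            # product A_k ... A_0
        Phi[k * nx:(k + 1) * nx] = Ak
        for j in range(k + 1):
            M = B_list[j]
            for i in range(j + 1, k + 1):
                M = A_list[i] @ M      # product A_k ... A_{j+1} B_j
            Gam[k * nx:(k + 1) * nx, j * nu:(j + 1) * nu] = M
    Qbar = block_diag(*([Q] * N))
    Rbar = block_diag(*([R] * N))
    H = Gam.T @ Qbar @ Gam + Rbar      # QP Hessian
    g = Gam.T @ Qbar @ Phi @ dx0       # QP gradient
    du = np.linalg.solve(H, -g)        # stationarity condition
    return du.reshape(N, nu)
```

Here A_list and B_list would come from linearizing the model along the reference trajectory (e.g. via a routine like `linearize_dynamics` above), and the returned du is the optimal input deviation to superimpose on the reference inputs.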