2016 IEEE 55th Conference on Decision and Control (CDC)
DOI: 10.1109/cdc.2016.7798815
A Hamilton-Jacobi-Bellman approach for the optimal control of an abort landing problem

Abstract: International audience

Cited by 8 publications (8 citation statements)
References 11 publications
“…To solve the control problem (P ∞ ), we will use the HJB approach as introduced in section 3. Let us mention other recent works [10,2] where an approximated control problem of (P ∞ ), involving a 4-dimensional model, is also considered using the HJB approach. In all our computations, the boundary of the domain K is defined as in Table 1.…”
Section: Computational Domain, Control Constraints (mentioning, confidence: 99%)
“…Next, we will reconstruct the associated optimal trajectories and feedback control using different reconstruction algorithms. Let us mention some recent works [10,2] where numerical analysis of the abort landing problem has also been investigated with a simplified model involving four-dimensional controlled systems. Here we consider the full five-dimensional control problem as in [11,12].…”
Section: Introduction (mentioning, confidence: 99%)
“…Merton pioneered stochastic optimal control for solving continuous-time problems in asset management [2]. The Hamilton-Jacobi-Bellman (HJB) equation is a common method used in much research for solving dynamic programming problems under the real-world probability measure [5]. Several authors have laid down analyses based on the stochastic control approach, such as using stochastic dynamic programming to analyse the financial risk in a defined contribution (DC) pension scheme under Gaussian interest rate models, attempting to find an optimal investment strategy [6][7][8][9][10][11].…”
Section: Introduction (mentioning, confidence: 99%)
“…Solving this equation has been the subject of a vast literature in numerical analysis of PDEs. We refer the reader to [30,15,7,2]. Here we use the software ROC-HJ [6] to compute numerically the minimum time function as a solution of the HJB equation satisfied by ϑ.…”
(mentioning, confidence: 99%)
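The quoted passage describes computing the minimum time function numerically as the solution of an HJB equation (the cited work uses the ROC-HJ solver on a 4-5D model). As a hedged illustration of the general technique only, the sketch below runs a semi-Lagrangian value iteration for a toy 1-D minimum-time problem with dynamics x' = u, |u| ≤ 1, and target {0}, where the exact minimum time is |x|; the grid, time step, and control set are assumptions made for this example, not taken from the paper.

```python
import numpy as np

# Semi-Lagrangian value iteration for a toy minimum-time HJB problem.
# System: x' = u, |u| <= 1, target {0}; the exact answer is T(x) = |x|.
# Illustrative sketch only -- not the ROC-HJ solver used in the cited work.

xs = np.linspace(-1.0, 1.0, 201)        # computational grid on [-1, 1]
dt = 0.01                               # pseudo-time step (matches grid spacing)
T = np.full_like(xs, 1e6)               # large value = "not yet reached"
T[np.argmin(np.abs(xs))] = 0.0          # boundary condition on the target {0}

controls = [-1.0, 1.0]                  # discretized control set
for _ in range(500):                    # fixed-point (value) iteration
    Tnew = T.copy()
    for u in controls:
        # Follow the characteristic x + dt*u and interpolate T there:
        # T(x) = min_u [ dt + T(x + dt*u) ]
        cand = dt + np.interp(np.clip(xs + dt * u, -1, 1), xs, T)
        Tnew = np.minimum(Tnew, cand)
    Tnew[np.argmin(np.abs(xs))] = 0.0   # re-impose the target condition
    if np.max(np.abs(Tnew - T)) < 1e-12:
        break                           # converged
    T = Tnew
```

Because dt equals the grid spacing here, the interpolation lands exactly on grid nodes and the iteration recovers T(x) = |x| essentially to machine precision; on realistic higher-dimensional models the interpolation introduces the numerical diffusion that grid-based HJB solvers must control.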
“…Once the value function is computed and the minimum time function is stored, one can reconstruct the optimal trajectories for different scenarios without solving the HJB equation again. Indeed, the trajectories can be reconstructed by appealing to the dynamic programming principle (21), see [3,2]. Here, the structure of the optimal control is obtained from (21) without requiring an analysis of first- or second-order optimality conditions.…”
(mentioning, confidence: 99%)
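The passage above notes that, once the minimum time function is stored, trajectories can be reconstructed from the dynamic programming principle without solving the HJB equation again. The sketch below illustrates that idea on the same assumed toy 1-D model (x' = u, |u| ≤ 1, exact value T(x) = |x|): at each step the control is chosen greedily so that the successor state has the smallest interpolated remaining time. This is a simplified stand-in, not the authors' 5-D reconstruction algorithm.

```python
import numpy as np

# Trajectory reconstruction from a stored minimum-time function via the
# dynamic programming principle: at each step, pick the control whose
# successor state minimizes the (interpolated) remaining time-to-target.
# Toy model x' = u, |u| <= 1; the stored value T(x) = |x| is exact here.

xs = np.linspace(-1.0, 1.0, 201)
T = np.abs(xs)                          # stored minimum-time (value) function
dt = 0.01                               # reconstruction time step
controls = [-1.0, 1.0]                  # discretized control set

x, traj = 0.8, [0.8]                    # initial state and recorded path
for _ in range(200):
    if abs(x) < 1e-6:                   # reached the target {0}
        break
    # Greedy descent of the value function (dynamic programming principle):
    x = min((float(np.clip(x + dt * u, -1, 1)) for u in controls),
            key=lambda xn: np.interp(xn, xs, T))
    traj.append(x)
```

The reconstruction needs only pointwise evaluations (here, interpolations) of the stored value function, which is why changing the initial condition costs almost nothing once the HJB equation has been solved.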