2017
DOI: 10.1007/978-3-319-60771-9_1
Optimality Conditions (in Pontryagin Form)

Cited by 4 publications (3 citation statements)
References 111 publications
“…for all t ∈ [0, T f ] and β ≥ 0, where is just the vector perpendicular to . In Appendix C , we show the details of the calculation based on Pontryagin's maximum principle (Aronna et al, 2017 ), showing that this indeed meets the requirements of the optimal control problem described in Equations (47)–(49). Substituting the specific solution described in Equation (52) into the dynamics of Equation (47) while also writing the tangent vector in terms of the angle θ( s, t ) between and the stimulus direction , i.e., , yields the following dynamical equation:…”
Section: Example Of An Optimal Control Approach (mentioning)
confidence: 55%
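The quoted passage applies Pontryagin's maximum principle to a specific control problem whose equations are not reproduced here. As a minimal illustration of how the principle is used in practice, the sketch below solves a toy problem of our own construction (not taken from the cited chapter): minimize ∫₀ᵀ u²/2 dt subject to x' = u, x(0) = 0, x(T) = 1. The Hamiltonian H = p·u − u²/2 is maximized at u = p, and p' = −∂H/∂x = 0, so the costate is a constant that can be recovered by shooting on the terminal condition.

```python
def shoot(p, T=2.0, n=1000):
    """Integrate x' = u = p with forward Euler; return x(T)."""
    dt = T / n
    x = 0.0
    for _ in range(n):
        x += p * dt          # optimal control u(t) = p (constant costate)
    return x

def solve_pmp(T=2.0, lo=-10.0, hi=10.0, tol=1e-10):
    """Bisect on the costate p until the trajectory hits x(T) = 1."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid, T) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_star = solve_pmp(T=2.0)
print(p_star)   # analytic optimum: u = p = 1/T = 0.5
```

Since x(T) = pT for this problem, the analytic optimum is p = 1/T, which the bisection recovers; realistic problems replace the closed-form costate with a numerically integrated adjoint equation.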
“…49: for all t ∈ [0, T f ] and β ≥ 0, where is just the perpendicular vector to . In Appendix C in the SM, we show the details of the calculation based on Pontryagin’s maximum principle (Aronna et al, 2017), showing that this indeed meets the requirements of the optimal control problem described in Eqs. 46, 47, 48.…”
Section: Example Of An Optimal Control Approach (mentioning)
confidence: 69%
“…One prominent approach to finding an optimal feedback law is calculating the value function, which can be done by solving either the Bellman equation or the Hamilton-Jacobi-Bellman equation (HJB). Popular numerical solutions to this problem are semi-Lagrangian methods [17,22,66], domain-splitting algorithms [23], variational iterative methods [35], data-based methods with neural networks [47,49], or policy iteration with a Galerkin ansatz [36,45].…”
Section: Introduction (mentioning)
confidence: 99%
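The Bellman-equation viewpoint in the quotation can be made concrete with the simplest possible scheme: fixed-point (value) iteration on a discretized state space. The toy deterministic shortest-path setup below is purely illustrative (our own assumption, not drawn from the cited references): states 0..10 on a line, unit step cost, goal at state 10.

```python
import math  # not strictly needed here, kept for a finite "infinity" alternative

N = 11                      # states 0..10, goal at state N-1
ACTIONS = (-1, +1)          # move left or right, clamped to the grid
STEP_COST = 1.0

def value_iteration(tol=1e-9):
    """Gauss-Seidel value iteration: V(s) = min_a [cost + V(next(s, a))]."""
    V = [0.0 if s == N - 1 else 1e9 for s in range(N)]  # large finite init
    while True:
        delta = 0.0
        for s in range(N - 1):          # goal state stays at 0
            best = min(STEP_COST + V[min(max(s + a, 0), N - 1)]
                       for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration()
print(V[0])   # 10 unit-cost steps from state 0 to the goal
```

The HJB methods listed in the quotation (semi-Lagrangian schemes, policy iteration, neural-network approximations) can be read as more sophisticated versions of this fixed-point idea on continuous state spaces, where a grid of this kind becomes infeasible in high dimension.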