Reinforcement Learning and Approximate Dynamic Programming for Feedback Control (2012)
DOI: 10.1002/9781118453988.ch11
Online Optimal Control of Nonaffine Nonlinear Discrete‐Time Systems without Using Value and Policy Iterations

Cited by 5 publications (1 citation statement) · References 22 publications
“…However, iterative ADP can only be computed offline, because its uncertain number of iterations leads to long computation times. In recent years, online ADP strategies have been widely proposed [14][15][16][17]; they obtain the optimal solution adaptively rather than by offline calculation.…”
Section: Introduction
confidence: 99%
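The offline/online distinction drawn in the excerpt can be illustrated with a minimal sketch of iterative ADP: plain value iteration on a scalar linear-quadratic problem, run until the value-function parameter converges. The system parameters (a, b, q, r) and the stopping tolerance are illustrative assumptions, not taken from the chapter; the point is only that the number of iterations needed is not known in advance, which is why such iterative schemes are typically run offline.

```python
# Sketch of offline iterative ADP: value iteration for the scalar
# discrete-time LQ problem x_{k+1} = a*x_k + b*u_k with stage cost
# q*x_k^2 + r*u_k^2. The quadratic value function V(x) = p*x^2 is
# refined by the Riccati recursion until |p_next - p| < tol; the
# iteration count depends on the system and tolerance, so it cannot
# be fixed ahead of time.

def value_iteration(a, b, q, r, tol=1e-10, max_iter=10_000):
    p = 0.0
    for k in range(1, max_iter + 1):
        # One Riccati update: cost-to-go under the greedy control.
        p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        if abs(p_next - p) < tol:
            return p_next, k
        p = p_next
    return p, max_iter

# Illustrative parameters (not from the chapter). For a=b=q=r=1 the
# Riccati fixed point is the golden ratio (1 + sqrt(5)) / 2.
p_star, iters = value_iteration(a=1.0, b=1.0, q=1.0, r=1.0)
```

An online ADP scheme would instead update the value-function parameters adaptively along the closed-loop trajectory, avoiding this open-ended offline loop.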