2020 American Control Conference (ACC)
DOI: 10.23919/acc45564.2020.9147287
Deep Learning for Control: a non-Reinforcement Learning View

Cited by 4 publications (2 citation statements)
References 4 publications
“…Care must be taken when learning state-dependent control maps for unstable systems. If the learning algorithm is not properly initialized, the system's instability can induce instability in the learning algorithm as well [30]. In [27], the authors use a hybrid modeling and learning methodology to deal with catching in-flight objects with uneven shapes.…”
Section: State of the Art in Hybrid Modeling
Confidence: 99%
“…In the case where the model is represented as an ODE, we obtained good results. For example, in [19] we showed how to learn control policies for an inverted pendulum using a model predictive control approach solved with PyTorch. When dealing with DAEs, though, the gradient-based optimization algorithm, when combined with direct collocation methods to approximate time derivatives, tends to converge slowly.…”
Section: Differentiable Programming for Gradient-Based Optimization
Confidence: 99%
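The gradient-based control optimization described in the last excerpt can be sketched in plain Python. This is a minimal illustration, not the cited method: finite differences stand in for the automatic differentiation a framework like PyTorch provides, and the pendulum constants, horizon, and cost weights are illustrative assumptions.

```python
import math

# Discretized inverted-pendulum dynamics (state: angle theta, angular rate omega).
# theta = 0 is upright; G, L, DT are illustrative constants.
G, L, DT = 9.81, 1.0, 0.02

def step(state, u):
    theta, omega = state
    omega = omega + DT * (G / L * math.sin(theta) + u)  # torque input u
    theta = theta + DT * omega
    return (theta, omega)

def rollout_cost(u_seq, state0):
    # Shooting: simulate the horizon and accumulate a quadratic cost
    # penalizing deviation from upright plus a small control penalty.
    state, cost = state0, 0.0
    for u in u_seq:
        state = step(state, u)
        cost += state[0] ** 2 + 0.01 * u ** 2
    return cost

def optimize(state0, horizon=50, iters=100, lr=0.5, eps=1e-5):
    # Gradient descent on the open-loop control sequence, with finite-difference
    # gradients and backtracking so the cost decreases monotonically.
    u = [0.0] * horizon
    cost = rollout_cost(u, state0)
    for _ in range(iters):
        grad = []
        for i in range(horizon):
            u[i] += eps
            grad.append((rollout_cost(u, state0) - cost) / eps)
            u[i] -= eps
        while lr > 1e-8:
            u_new = [ui - lr * gi for ui, gi in zip(u, grad)]
            cost_new = rollout_cost(u_new, state0)
            if cost_new < cost:
                u, cost = u_new, cost_new
                break
            lr *= 0.5  # step overshot: shrink and retry
    return u, cost

state0 = (0.3, 0.0)  # start 0.3 rad off upright, at rest
u_star, cost_star = optimize(state0)
```

In an autodiff framework the inner finite-difference loop disappears: the rollout is written with differentiable tensor operations and the gradient of the cost with respect to the whole control sequence comes from one backward pass, which is the workflow the excerpt attributes to [19].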