2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM)
DOI: 10.1109/aim.2019.8868711

A Feedback Force Controller Fusing Traditional Control and Reinforcement Learning Strategies

Cited by 5 publications (2 citation statements) · References 18 publications
“…The training results depend on the setting of random seeds, which is unstable. This is why current DRL algorithms are mainly implemented on simulation platforms and are difficult to apply to a real vehicle [29].…”
Section: Introduction (mentioning, confidence: 99%)
“…In addition, this work used a different concept from our previous work [30], which set the PID controller in the main control role and deep RL as the compensation controller. This work instead set the PID controller as the assistant for trajectory tracking, with deep imitation learning as the main trajectory generator producing complete trajectories offline before the task.…”
Section: Introduction (mentioning, confidence: 99%)
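
The second citation statement summarizes the fusion idea attributed to the cited paper: a PID controller carries the main feedback force command while a deep RL policy supplies a learned compensation term. The sketch below is only a minimal illustration of that split, assuming a simple additive fusion u = u_PID + u_RL; the names (PIDController, fused_control, rl_policy), the gains, and the dummy policy are hypothetical placeholders, not details taken from the paper.

```python
import numpy as np


class PIDController:
    """Incremental PID term for the main feedback force loop (illustrative gains)."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, dt=0.001):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Standard PID update on the force-tracking error.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def fused_control(force_error, state, rl_policy, pid):
    """Fuse the traditional PID command with an RL compensation term.

    `rl_policy` is assumed to map the current state (e.g. force error and its
    history) to a bounded compensation action; here it is just a callable.
    """
    u_pid = pid.update(force_error)   # main control role: PID
    u_rl = rl_policy(state)           # compensation learned by deep RL
    return u_pid + u_rl               # fused feedback force command


if __name__ == "__main__":
    pid = PIDController()
    # Placeholder standing in for a trained actor network.
    dummy_policy = lambda s: 0.1 * np.tanh(s[0])
    u = fused_control(force_error=0.5,
                      state=np.array([0.5, 0.0]),
                      rl_policy=dummy_policy,
                      pid=pid)
    print(f"fused command: {u:.4f}")
```

Under this assumed structure, the RL term only corrects residual error left by the PID loop, which matches the "compensation controller" role described in the citation statement; the later work cited above inverts the roles, letting imitation learning generate trajectories offline and the PID loop track them.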