2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8794127

Residual Reinforcement Learning for Robot Control

Abstract: Conventional feedback control methods can solve various types of robot control problems very efficiently by capturing the structure with explicit models, such as rigid body equations of motion. However, many control problems in modern manufacturing deal with contacts and friction, which are difficult to capture with first-order physical modeling. Hence, applying control design methodologies to these kinds of problems often results in brittle and inaccurate controllers, which have to be manually tuned for deployment…
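The residual formulation referenced throughout the citations below is simple to state: the executed command is the sum of a fixed hand-engineered controller and a learned corrective policy, u(s) = π_H(s) + π_θ(s). The following is a minimal sketch of that superposition, not the paper's implementation: the `hand_controller`, the linear residual, and all gains are hypothetical stand-ins, whereas in the paper the residual is a neural-network policy trained with an off-the-shelf RL algorithm on the task reward.

```python
import numpy as np

def hand_controller(obs):
    # Hypothetical hand-engineered feedback law: a proportional controller
    # driving the end effector toward the goal (stands in for pi_H(s)).
    return 1.0 * (obs["goal"] - obs["position"])

def residual_policy(obs, theta):
    # Stand-in for the learned residual pi_theta(s); a linear map for brevity.
    features = np.concatenate([obs["goal"], obs["position"]])
    return theta @ features

def residual_control(obs, theta):
    # Residual RL: the executed action superposes the fixed controller
    # and the learned correction.
    return hand_controller(obs) + residual_policy(obs, theta)

# Usage with hypothetical 3-D positions and a zero-initialized residual,
# so behavior starts out identical to the hand-engineered controller.
obs = {"goal": np.array([0.5, 0.2, 0.3]), "position": np.zeros(3)}
theta = np.zeros((3, 6))
action = residual_control(obs, theta)
```

Initializing the residual near zero is a natural choice here: exploration then starts from the hand-engineered controller's behavior rather than from random actions.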

Citation types: 2 supporting, 214 mentioning, 0 contrasting

Cited by 323 publications (234 citation statements)
References 18 publications
Citation statements (ordered by relevance):
“…This approach provides a wider range of data-driven corrections that can compensate for noisy observations as well as dynamics that are not explicitly modeled. These benefits are also observed in concurrent work on residual reinforcement learning [16,30] in block-assembly and object manipulation tasks.…”
Section: Related Work (mentioning)
confidence: 67%
“…Indeed, it is often necessary in practice to implement gravity compensation to learn successful manipulation strategies using torque control [11], [5]. Johannink et al. have shown that formulating the learning task to learn residuals to a hand-crafted model-based controller can improve learning efficiency [12]. Another approach is to design the action space in terms of references for an underlying model-based controller.…”
Section: Introduction (mentioning)
confidence: 99%
“…While there is some work that formulates the action space in terms of a low-level Cartesian position controller for tasks such as block stacking [12], pushing, and pick and place [30], there is surprisingly little work in reinforcement learning that uses impedance control to combine the ideas of task space control with the mechanical compliance necessary to perform delicate manipulation tasks under uncertainty, and no work to our knowledge that attempts a systematic comparison between alternative policy structures in this domain.…”
Section: Introduction (mentioning)
confidence: 99%
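The impedance-control policy structure this excerpt advocates can be sketched concretely. This is an illustrative sketch under assumptions, not code from the cited work: the function name, the gains, and the unit-task-space-mass damping heuristic are all hypothetical. The learned policy chooses a Cartesian target and stiffness, and a standard impedance law turns them into joint torques.

```python
import numpy as np

def impedance_torques(x, xd, jacobian, x_target, stiffness, damping_ratio=1.0):
    # Task-space impedance law: F = K (x_target - x) - D xd, mapped to joint
    # torques through the manipulator Jacobian. Gravity compensation is
    # assumed to be added separately, as the excerpts above note.
    K = np.diag(stiffness)
    # Near-critical damping, assuming unit task-space mass for simplicity.
    D = np.diag(2.0 * damping_ratio * np.sqrt(stiffness))
    wrench = K @ (x_target - x) - D @ xd
    return jacobian.T @ wrench

# A policy over this action space outputs (x_target, stiffness) rather than
# raw torques; lowering the stiffness yields the mechanical compliance the
# excerpt argues is needed for delicate contact-rich manipulation.
x, xd = np.zeros(3), np.zeros(3)
jacobian = np.eye(3)  # hypothetical 3-DoF arm at a nominal configuration
tau = impedance_torques(x, xd, jacobian,
                        x_target=np.array([0.1, 0.0, 0.2]),
                        stiffness=np.array([300.0, 300.0, 100.0]))
```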
“…Imperfect models are common in robotic systems. They have been studied widely in various contexts, e.g., to account for uncertainty in the planning model [13,16,23,49,44] or to directly learn strategies that are robust against imperfections in environment models [11,2], policies [24,42], or approximate algorithms [40]. Unlike the earlier work, the DAN commits to the algorithm choices, but adapts the models to compensate for imperfections through end-to-end training from data.…”
Section: Related Work (mentioning)
confidence: 99%