2017
DOI: 10.48550/arxiv.1709.10087
Preprint
Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations

Cited by 139 publications (221 citation statements)
References 0 publications
“…Data-driven models are observed to be utilized frequently for speculating the behavior of complex systems in various domains such as biological [1], mechanical [2], [3], robotics [4]-[6], and energy forecasting [7] systems. In certain cases, wherein a "full physics" (high fidelity) model is unavailable or is computationally prohibitive to evaluate, a "partial physics" (lower fidelity) model can be used for prediction.…”
Section: A Physics-Infused Hybrid Modeling (mentioning, confidence: 99%)
“…An ideal transfer model maps the transferred feature for the partial physics model such that the partial physics model generates the actual outputs (i.e., the full physics experimental values). Equation (6) shows the role of the transfer model for the partial physics model.…”
Section: B Partial Physics Acoustic Model (mentioning, confidence: 99%)
“…Van Hoof et al. [40] learned a tactile-informed policy via RL for a three-finger manipulator to move an object on the table. To reduce the sample complexity of model-free learning, [9,13,8,41,6] combined imitation learning with RL to learn to reorient a pen, open a door, assemble LEGO blocks, etc. However, collecting expert demonstration data from humans is expensive, time-consuming, and even incredibly difficult for contact-rich tasks [8].…”
Section: Related Work (mentioning, confidence: 99%)
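A common way to combine demonstrations with model-free RL, as mentioned in the excerpt above, is to add a behavior-cloning term on expert (state, action) pairs to the policy loss. The sketch below illustrates that idea only; it is not the exact objective of the cited papers (e.g., the demo-augmented policy gradient of this preprint), and all tensors, dimensions, and the 0.1 demo weight are hypothetical placeholders.

```python
# Sketch: policy-gradient surrogate loss plus a behavior-cloning term on demos.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(24, 64), nn.Tanh(), nn.Linear(64, 4))
log_std = torch.zeros(4, requires_grad=True)   # diagonal-Gaussian log std
opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=3e-4)

# Placeholder batches: on-policy rollout data and expert demonstrations.
states       = torch.randn(32, 24)   # states visited by the current policy
actions      = torch.randn(32, 4)    # actions the policy took there
advantages   = torch.randn(32)       # advantage estimates for those actions
demo_states  = torch.randn(16, 24)   # demonstrated states
demo_actions = torch.randn(16, 4)    # demonstrated (expert) actions

def log_prob(s, a):
    # Log-likelihood of actions a under the Gaussian policy in states s.
    dist = torch.distributions.Normal(policy(s), log_std.exp())
    return dist.log_prob(a).sum(dim=-1)

pg_loss = -(log_prob(states, actions) * advantages).mean()   # RL surrogate
bc_loss = -log_prob(demo_states, demo_actions).mean()        # imitation term
loss = pg_loss + 0.1 * bc_loss                               # arbitrary weight

opt.zero_grad()
loss.backward()
opt.step()
```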
“…As such, it remains unclear how to scale these methods to the real world and generalize to new objects. Another line of work has employed (ii) model-based reinforcement learning [4,5]; or (iii) model-free reinforcement learning with [6,7,8,9] and without expert demonstrations [10,11,12,13]. While some of these works demonstrated learned skills on real robots, they required additional sensory apparatus not readily available in the real world (e.g., a motion capture system) to infer the object state, and the learned policies did not generalize to diverse objects.…”
Section: Introduction (mentioning, confidence: 99%)
“…Most robot manipulation simulations concentrate on solving specific manipulation problems, such as grasping [17,18] and in-hand manipulation [19][20][21]. The recent MetaWorld [22] and RLBench [23] benchmarks contain a variety of manipulation tasks for robot learning.…”
Section: A Simulation Environments for Robotic Manipulation (mentioning, confidence: 99%)