2023 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra48891.2023.10160216
DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality

Cited by 45 publications (10 citation statements)
References 12 publications
“…Nevertheless, they pose certain hypotheses about the objects and the controllers, which makes it hard to scale to more complex tasks. To overcome this limitation, deep reinforcement learning has recently been applied to dexterous manipulation [2,28,15,53,14,52,31]. Building on these works, incorporating demonstrations with imitation learning also leads to better sample efficiency and more natural manipulation behaviors [55,56,4,54,72,75,38,49,3].…”
Section: Related Work (mentioning)
confidence: 99%
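The statement above highlights combining deep RL with demonstration-based imitation learning for better sample efficiency. As a rough, non-authoritative illustration of that idea (not the method of any cited paper), the sketch below adds a behavior-cloning term on demonstration data to a simplified policy-gradient loss; the network sizes, observation/action dimensions, and the weighting coefficient `bc_weight` are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical policy network: maps a 64-dim observation to 20 joint-action means.
policy = nn.Sequential(nn.Linear(64, 256), nn.ELU(), nn.Linear(256, 20))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
bc_weight = 0.1  # assumed weight of the imitation term

def update(obs, actions, advantages, demo_obs, demo_actions):
    """One combined update: policy gradient on rollouts plus behavior cloning on demos."""
    # Simplified policy-gradient surrogate (unit-variance Gaussian log-prob, no clipping).
    log_prob = -0.5 * ((actions - policy(obs)) ** 2).sum(dim=-1)
    pg_loss = -(log_prob * advantages).mean()

    # Behavior-cloning term pulls the policy toward the demonstrated actions.
    bc_loss = ((policy(demo_obs) - demo_actions) ** 2).mean()

    loss = pg_loss + bc_weight * bc_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A common design choice in this family of methods is to decay the imitation weight over training so the policy can eventually improve beyond the demonstrations.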
“…Building on these works, incorporating demonstrations with imitation learning also leads to better sample efficiency and more natural manipulation behaviors [55,56,4,54,72,75,38,49,3]. However, most in-hand manipulation methods still rely heavily on visual inputs [2,28,14]. For example, Chen et al. [14] propose to perform in-hand object re-orientation using depth image input, and new hardware is designed to avoid heavy occlusion.…”
Section: Related Work (mentioning)
confidence: 99%
“…RL algorithms have been successfully demonstrated for in-hand manipulation tasks [9]-[11], [21]-[23]. One key difference between these and our work is that in in-hand manipulation, the object typically starts in close proximity to the robot, whereas in our domain, we must solve the additional problem of reaching and making contact with the object.…”
Section: B. Reinforcement Learning for Contact-Rich Tasks (mentioning)
confidence: 99%
“…with continuous contact or variations in friction during object interactions. In-hand manipulation employing external vision is discussed in [3], [4], as is the use of an adaptive reinforcement learning (RL) policy derived from simulation torque input in robotic hands [5]. However, despite solutions to the sim-to-real problem, grasping objects in cluttered environments complicates policy training.…”
Section: Related Work (mentioning)
confidence: 99%
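The last statement above touches on sim-to-real transfer, which DeXtreme and related work typically address with domain randomization of the simulated physics. The snippet below is a minimal, generic sketch of that idea under assumed parameter names and ranges; the `sim.set_physics` call is a hypothetical placeholder for whatever API the simulator actually exposes.

```python
import numpy as np

# Illustrative randomization ranges for a simulated hand-and-object scene;
# the parameter names and bounds are assumptions, not values from any cited paper.
RANDOMIZATION_RANGES = {
    "object_mass_kg": (0.03, 0.30),
    "friction_coeff": (0.5, 1.5),
    "joint_damping": (0.01, 0.20),
    "actuator_torque_scale": (0.8, 1.2),
}

def sample_physics_params(rng: np.random.Generator) -> dict:
    """Draw one set of physics parameters for a single training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

# Re-randomize at every episode reset so the policy cannot overfit to one
# particular set of simulated dynamics.
rng = np.random.default_rng(seed=0)
for episode in range(3):
    params = sample_physics_params(rng)
    # sim.set_physics(**params)  # hypothetical simulator call
    print(episode, params)
```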