2022
DOI: 10.1080/01691864.2022.2126729
Untying cable by combining 3D deep neural network with deep reinforcement learning

Cited by 2 publications (3 citation statements) · References 12 publications
“…The Simulation Open Framework Architecture (SOFA) [40] physics engine was developed for medical simulation; it uses FEM to model deformable objects (DO), can model interactions between rigid and deformable objects, and allows customised solvers for different mechanical objects, constraints, and collision geometries. Unity with Obi [41] was used by Weng et al. (2021) [42] to generate synthetic data for bag manipulation, and it could provide a good commercial environment for rope manipulation [43]. ARCsim [44] is a cloth-specific simulation environment that has been employed in many perception-focused applications [45,46].…”
Section: Simulation Environments
confidence: 99%
“…There are many applications of IL in the KT/KU literature, but it is difficult to frame the task as an MDP to develop an RL controller; one of the challenges is reward shaping. Fan et al. (2022) [43] use the DRL algorithm Deep Q-learning (DQN) [126], which takes embedded states as input and discretised grasping points and moving directions as actions. They achieve a 54% success rate on single-arm untying of knots on a table.…”
Section: Data-driven Manipulation in Knot Tying/Untying
confidence: 99%
“…Using P&P action primitives, Fan et al. (2022) [93] use the discrete-action DRL algorithm Deep Q-learning (DQN) [255], which takes the embedded states as input and discretised grasping points and moving directions as actions. They achieve a 54% success rate using a single arm to untie knots on a table.…”
Section: Data-driven Control in Rope Manipulation
confidence: 99%
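Both citation statements describe the same action discretisation: the DQN maps an embedded state to one Q-value per (grasp point, moving direction) pair, and the greedy action is decoded back into its two components. The sketch below illustrates that structure with a toy linear Q-function standing in for the trained network; the grasp-point count, direction count, and state dimension are assumptions for illustration, not values from the cited paper.

```python
import random

N_GRASP_POINTS = 16   # assumed discretisation of candidate grasp points
N_DIRECTIONS = 8      # assumed discretisation of moving directions
N_ACTIONS = N_GRASP_POINTS * N_DIRECTIONS
STATE_DIM = 32        # assumed size of the embedded state vector

random.seed(0)
# Toy linear Q-function standing in for the trained DQN.
weights = [[random.uniform(-1, 1) for _ in range(STATE_DIM)]
           for _ in range(N_ACTIONS)]

def q_values(state):
    """Q(s, a) for every discrete action (linear stand-in for the network)."""
    return [sum(w * s for w, s in zip(row, state)) for row in weights]

def select_action(state, epsilon=0.1):
    """Epsilon-greedy choice over the flattened action space."""
    if random.random() < epsilon:
        a = random.randrange(N_ACTIONS)
    else:
        qs = q_values(state)
        a = max(range(N_ACTIONS), key=qs.__getitem__)
    # Decode the flat index back into (grasp point, moving direction).
    return divmod(a, N_DIRECTIONS)

state = [random.uniform(-1, 1) for _ in range(STATE_DIM)]
grasp, direction = select_action(state, epsilon=0.0)
```

Flattening the two action components into one index keeps the network output a single vector of Q-values, which is the standard way DQN handles a factored discrete action space.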