2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8793780
Imitating Human Search Strategies for Assembly

Abstract: We present a Learning from Demonstration method for teaching robots to perform search strategies imitated from humans in scenarios where alignment tasks fail due to position uncertainty. The method utilizes human demonstrations to learn both a state invariant dynamics model and an exploration distribution that captures the search area covered by the demonstrator. We present two alternative algorithms for computing a search trajectory from the exploration distribution, one based on sampling and another based on…
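The abstract is truncated here, but the sampling-based variant it mentions can be illustrated with a minimal sketch: fit a distribution to the search area covered in human demonstrations, then sample and order waypoints from it. Everything below (the single-Gaussian model, the function names, the greedy waypoint ordering) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def fit_exploration_distribution(demos):
    """Fit a Gaussian over demonstrated end-effector positions.

    demos: list of (T_i, 2) arrays of in-plane search positions
    relative to the nominal alignment pose. A single Gaussian is
    an assumption; the paper's distribution may differ.
    """
    points = np.vstack(demos)
    mean = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    return mean, cov

def sample_search_trajectory(mean, cov, n_waypoints=50, rng=None):
    """Draw waypoints from the exploration distribution and order
    them greedily by nearest neighbor to get a short search path."""
    rng = np.random.default_rng(rng)
    samples = rng.multivariate_normal(mean, cov, size=n_waypoints)
    path = [samples[0]]
    remaining = list(samples[1:])
    while remaining:
        last = path[-1]
        idx = int(np.argmin([np.linalg.norm(p - last) for p in remaining]))
        path.append(remaining.pop(idx))
    return np.asarray(path)
```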

Cited by 13 publications (11 citation statements)
References 15 publications
“…In the pick-and-place setting, only one policy is learned for the whole assembly process; nevertheless, this may fail in small-parts assembly tasks, since features of the assembly motion vary significantly between the two phases. Another formulation of small-parts assembly is the peg-in-hole problem [14,16,26], and Reinforcement Learning (RL) based methods are widely applied to learn a decision-making policy that maps states to actions through trial and error [7,8,12,27]. RL-based approaches typically require the robot to explore the state space.…”
Section: Initial Frame
confidence: 99%
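For context, the trial-and-error RL loop this snippet refers to can be sketched in a few lines. The toy grid environment, hole location, reward values, and tabular Q-learning below are generic illustrations; the cited works [7,8,12,27] use substantially richer state and action spaces.

```python
import numpy as np

# Toy peg-in-hole: the agent nudges the peg on a small grid and is
# rewarded when it reaches the (unknown to it) hole cell. Purely
# illustrative of a trial-and-error RL loop, not code from the
# cited papers.
GRID, HOLE, ACTIONS = 5, (3, 1), [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = np.zeros((GRID, GRID, len(ACTIONS)))
rng = np.random.default_rng(0)

for episode in range(2000):
    s = (rng.integers(GRID), rng.integers(GRID))  # random start offset
    for _ in range(50):
        # epsilon-greedy exploration of the state space
        a = rng.integers(4) if rng.random() < 0.1 else int(np.argmax(Q[s]))
        nxt = (min(max(s[0] + ACTIONS[a][0], 0), GRID - 1),
               min(max(s[1] + ACTIONS[a][1], 0), GRID - 1))
        r = 1.0 if nxt == HOLE else -0.01         # sparse insertion reward
        Q[s][a] += 0.1 * (r + 0.95 * np.max(Q[nxt]) - Q[s][a])
        s = nxt
        if s == HOLE:
            break
```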
“…Many strategies to solve this problem depend on expensive force and torque sensors [17], [18]. Sensorless strategies [19], [20], [21], on the other hand, rely only on the state of the end-effector and provide a low-cost solution. However, most sensorless strategies depend either on a predetermined trajectory that the robot end-effector needs to follow [21] or on a full model of the insertion behavior [19], [20]. In [21], insertion is treated as a 3D (2D position and 1D orientation of the end-effector) trajectory-tracking problem, where the reference trajectory for the robot end-effector is generated offline using a coverage strategy such as ergodic control.…”
Section: Insertion Tasks
confidence: 99%
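A concrete instance of the "predetermined trajectory" search contrasted here is an Archimedean spiral around the nominal insertion pose, which sweeps a disc so that a hole anywhere inside it is eventually crossed. The sketch below is a generic example with placeholder parameter values, not code from [19], [20], or [21].

```python
import numpy as np

def spiral_search_trajectory(pitch=0.5e-3, step=0.2e-3, n_points=400):
    """Archimedean spiral r = (pitch / 2*pi) * theta around the nominal
    insertion point. `pitch` is the radial gap between neighboring
    loops; choosing it below the hole clearance guarantees coverage.
    Parameter values are placeholders.
    """
    b = pitch / (2 * np.pi)
    # choose theta samples so consecutive waypoints are ~`step` apart
    thetas, theta = [], 0.0
    for _ in range(n_points):
        thetas.append(theta)
        theta += step / max(b * theta, step)  # ds ~ r*dtheta once r >> b
    thetas = np.asarray(thetas)
    r = b * thetas
    return np.stack([r * np.cos(thetas), r * np.sin(thetas)], axis=1)
```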
“…Another method is the so-called Learning from Demonstration (LfD) method, where the human transfers motor skills to the robot by hand-guiding it. A complex framework is necessary to recover the important parts of the motion and to execute the task successfully [12,13]. To avoid the need for a human operator, self-learning methods have become more popular in recent years.…”
Section: Introduction
confidence: 99%
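The hand-guiding (kinesthetic teaching) step described here typically amounts to putting the arm in gravity compensation and logging end-effector poses. The sketch below assumes a hypothetical robot interface; `set_zero_torque_mode` and `get_ee_pose` are placeholders for whatever your robot driver actually provides.

```python
import time
import numpy as np

def record_kinesthetic_demo(robot, hz=100.0, duration_s=10.0):
    """Log end-effector poses while a person hand-guides the robot.

    `robot` is a hypothetical interface; substitute your own driver's
    gravity-compensation and pose-query calls.
    """
    robot.set_zero_torque_mode(True)       # gravity compensation: human leads
    poses, t0 = [], time.monotonic()
    while time.monotonic() - t0 < duration_s:
        poses.append(robot.get_ee_pose())  # e.g. (x, y, z, qx, qy, qz, qw)
        time.sleep(1.0 / hz)
    robot.set_zero_torque_mode(False)
    return np.asarray(poses)
```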