2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8460986

Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

Abstract: In order to robustly execute a task under environmental uncertainty, a robot needs to be able to reactively adapt to changes arising in its environment. Environment changes are usually reflected as deviations from expected sensory traces. These deviations can be used to drive motion adaptation, and for this purpose a feedback model is required: it maps deviations in sensory traces to adaptations of the motion plan. In this paper, we develop a general data-driven framew…
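To make the abstract's core idea concrete, the following is a minimal sketch (not the paper's implementation) of a dynamic movement primitive whose acceleration is corrected by a coupling term produced from a feedback model. The feedback model here is a stand-in linear map `W`; the paper learns this map with phase-modulated neural networks, and all names and values below are illustrative assumptions.

```python
import numpy as np

def feedback_model(sensor_deviation, W):
    """Hypothetical learned map: sensory-trace deviation -> coupling term."""
    return W @ sensor_deviation

def dmp_step(y, yd, goal, forcing, coupling, tau=1.0, dt=0.01,
             alpha=25.0, beta=6.25):
    """One Euler step of a DMP transformation system with a coupling term
    added to the acceleration, so sensory deviations adapt the motion."""
    ydd = (alpha * (beta * (goal - y) - yd) + forcing + coupling) / tau
    yd = yd + ydd * dt
    y = y + yd * dt
    return y, yd

# Usage: a nonzero deviation between expected and actual sensor traces
# produces a coupling term that perturbs the nominal motion plan.
W = np.array([[0.5, 0.0],
              [0.0, 0.5]])          # illustrative feedback weights
dev = np.array([0.1, -0.2])         # expected minus actual sensor trace
c = feedback_model(dev, W)
y, yd = dmp_step(np.zeros(2), np.zeros(2), np.ones(2),
                 forcing=0.0, coupling=c)
```

The point of the sketch is only the data flow: the deviation enters as an additive coupling term in the transformation system, so a zero deviation recovers the unmodified DMP rollout.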


Cited by 18 publications (30 citation statements) · References 21 publications
“…The first includes all those approaches that require an additional learning phase, in which executions both with and without obstacle are recorded, to model ϕ. For instance, in [29] and [37] a Neural Network is used to model the perturbation term. In [28] an analytical formulation is presented, but the number of free parameters that has to be tuned requires an additional learning process.…”
Section: Methods For Obstacle Avoidance
confidence: 99%
“…Other approaches (e.g. [22,28,29,37]) require multiple demonstrations with different types and sizes of the obstacles to learn the obstacle avoidance behavior.…”
Section: Introduction
confidence: 99%
“…Imitation learning (IL), also referred to as learning from demonstration (LfD), is a powerful approach for complex manipulation tasks, which perceives and reproduces human movements without the need of explicit programming of behavior ( Takamatsu et al, 2007 ; Kormushev et al, 2011 ; Suomalainen and Kyrki, 2017 ; Hu et al, 2020 ). Among the IL approaches, DMPs ( Ijspeert et al, 2013 ) have shown the ability to generalize demonstrations in different manipulation tasks ( Peters and Schaal, 2008 ; Metzen et al, 2014 ; Hu et al, 2018 ; Sutanto et al, 2018 ). However, the forces and torques that a human applies during the demonstrations of contact-rich tasks are required to regress a proper admittance gain of robot controller ( Tang et al, 2016 ) or to match with modified demonstrated trajectories using DMPs ( Abu-Dakka et al, 2015 ; Savarimuthu et al, 2017 ).…”
Section: Related Work
confidence: 99%
“…Although DMPs have demonstrated to be successful in many applications, their generalization ability is limited. In recent work, DMPs have been extended to adapt to unknown sensor measurements via a feedback model learned from demonstration [14] [15]. Aside from attractor systems, many different general-purpose policy representations have been developed.…”
Section: A Robust Policy Representations
confidence: 99%