2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2015.7353413

Learning bimanual end-effector poses from demonstrations using task-parameterized dynamical systems

Abstract: Very often, when addressing the problem of human-robot skill transfer in task space, only the Cartesian position of the end-effector is encoded by the learning algorithms, instead of the full pose. However, orientation is just as important as position, if not more, when it comes to successfully performing a manipulation task. In this paper, we present a framework that allows robots to learn the full poses of their end-effectors in a task-parameterized manner. Our approach permits the encoding of complex skills…

Cited by 62 publications (74 citation statements) | References 19 publications

“…For simplicity of the experiment, the orientation of the robot end-effector was kept constant. However, the approach also supports tasks with variable end-effector orientations [49].…”
Section: Methods (mentioning)
confidence: 99%
“…In previous work, task parameters have been used to represent poses of objects in a robot workspace, mapping local models of demonstrations (from the perspective of P different objects) onto a global coordinate system, typically the robot base frame. In this case, A^(j) is a rotation matrix [8], [10], [37], [38] or a quaternion matrix [9], representing an object orientation, and b^(j) is a translation vector, representing the origin of an object coordinate system with respect to the base frame of the robot. It was common in previous work to refer to task parameters as candidate frames or candidate coordinate systems.…”
Section: A. Overview and Nomenclature (mentioning)
confidence: 99%
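
The affine structure in the excerpt above is easy to make concrete. The following is a minimal Python/NumPy sketch (function and variable names are illustrative, not taken from the cited papers) of how a task parameter (A^(j), b^(j)) maps a local Gaussian component of a demonstration model into the robot base frame:

```python
import numpy as np

def to_global_frame(mu_local, sigma_local, A, b):
    """Map a local Gaussian (mean, covariance) into the global frame.

    A: rotation matrix of the j-th task parameter (object orientation).
    b: translation vector (object-frame origin in the robot base frame).
    """
    mu_global = A @ mu_local + b          # affine map of the mean
    sigma_global = A @ sigma_local @ A.T  # congruence transform of the covariance
    return mu_global, sigma_global

# Example: a planar object frame rotated by 90 degrees and shifted along x.
theta = np.pi / 2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = np.array([1.0, 0.0])
mu_g, sigma_g = to_global_frame(np.array([0.5, 0.0]), 0.01 * np.eye(2), A, b)
```

In task-parameterized models such as TP-GMM, every Gaussian of every local model is mapped this way, and the transformed components are then combined (e.g., via products of Gaussians) when the skill is reproduced.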
“…For this reason, most research on bimanual skill learning exploits operational space formulations (e.g., [3], [4], [5], [6], [7], [8], [9]) that focus on task space constraints (e.g., demonstrated coordination between end-effectors and object-related movements) which need to be reproduced precisely in order to successfully complete a task.…”
Section: Introduction (mentioning)
confidence: 99%
“…We use unit quaternions to represent orientation data, as in Silvério et al. (2015). We chose this representation as it is singularity-free and uses only 4 parameters.…”
Section: Orientation Data (mentioning)
confidence: 99%
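
To make the representation in the excerpt above concrete, here is a minimal Python/NumPy sketch (illustrative code, not from the cited work): a unit quaternion is a 4-vector on the unit sphere S^3, and orientations compose via the Hamilton product.

```python
import numpy as np

def quat_normalize(q):
    """Project a 4-vector onto S^3 to obtain a unit quaternion."""
    return q / np.linalg.norm(q)

def quat_product(q1, q2):
    """Hamilton product, scalar-first convention [w, x, y, z]."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
```

Unlike three-parameter representations such as Euler angles, this parameterization has no representational singularities; the only subtlety is the double cover, since q and -q encode the same rotation.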
“…This way we can directly use this representation of orientations under the TP-HSMM formulation by setting ξ_n = ε̂_n, b_{n,j} = 0 and A_{n,j} = E_{n,j} for the orientation part of the task parameters, see Silvério et al. (2015) for details. The projection of Eq.…”
Section: Orientation Data (mentioning)
confidence: 99%
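
The quaternion product is linear in one of its arguments, which is what allows a quaternion task parameter to act as the linear map A_{n,j} (with b_{n,j} = 0) in the excerpt above. Below is a minimal sketch of such a quaternion product matrix E (illustrative Python/NumPy code under assumed conventions; see Silvério et al. (2015) for the actual formulation):

```python
import numpy as np

def quat_matrix(q):
    """Left quaternion-product matrix E, so that quat_matrix(q1) @ q2
    equals the Hamilton product q1 * q2 (scalar-first [w, x, y, z])."""
    w, x, y, z = q
    return np.array([
        [w, -x, -y, -z],
        [x,  w, -z,  y],
        [y,  z,  w, -x],
        [z, -y,  x,  w],
    ])

# A quaternion task parameter then acts purely linearly on orientation
# data: A = E(q_frame), b = 0, matching the excerpt above.
q_frame = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])  # 90 deg about z
q_data = np.array([1.0, 0.0, 0.0, 0.0])                               # identity orientation
q_mapped = quat_matrix(q_frame) @ q_data                              # == q_frame
```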