2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2012.6386072
Generalization of human grasping for multi-fingered robot hands

Abstract: Multi-fingered robot grasping is a challenging problem that is difficult to tackle using hand-coded programs. In this paper we present an imitation learning approach for learning and generalizing grasping skills based on human demonstrations. To this end, we split the task of synthesizing a grasping motion into three parts: (1) learning efficient grasp representations from human demonstrations, (2) warping contact points onto new objects, and (3) optimizing and executing the reach-and-grasp movements. …
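The abstract outlines a three-stage pipeline: representation learning, contact warping, and reach-and-grasp optimization. The Python sketch below shows one minimal way such a pipeline could be wired together; the function names, the PCA-based representation, and the externally supplied warp function and contact cost are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize


def learn_grasp_representation(demonstrated_postures, n_components=2):
    """Stage 1: compress recorded human hand postures (e.g. data-glove joint
    angles) into a low-dimensional linear subspace via PCA."""
    X = np.asarray(demonstrated_postures, dtype=float)   # (n_grasps, n_joints)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]                       # subspace basis


def warp_contact_points(contacts_on_known_object, warp_fn):
    """Stage 2: carry contact points from the demonstration object over to a
    novel object through a shape-warping function (assumed given here)."""
    return np.array([warp_fn(c) for c in contacts_on_known_object])


def optimize_reach_and_grasp(mean, basis, target_contacts, contact_cost):
    """Stage 3: search the low-dimensional posture space for a hand
    configuration whose fingertips best reach the warped contacts."""
    res = minimize(lambda z: contact_cost(mean + z @ basis, target_contacts),
                   x0=np.zeros(len(basis)))
    return mean + res.x @ basis                          # full joint vector
```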

Cited by 57 publications (36 citation statements)
References 29 publications
“…A known object's geometry is warped until it matches that of a novel object, thereby also warping grasp points on the surface of the known object onto candidate grasp points on the novel object. Ben Amor et al (2012) exploit this warping method to transfer grasps taught by a human hand (using a data glove) to contact points for a robot hand on a novel object. We compute a full hand grasping configuration for a novel object, using a grasp model that is learned from a single or a few example grasps.…”
Section: IJRR -(-)
confidence: 99%
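The statement above describes grasp transfer by shape warping: the warp that aligns a known object with a novel one also carries its grasp points across. Below is a minimal sketch under the assumption that the warp is given as a per-point displacement of a sampled surface; the nearest-neighbour lookup is an illustrative stand-in for the warping models used in the cited works.

```python
import numpy as np
from scipy.spatial import cKDTree


def transfer_grasp_points(known_surface, warped_surface, grasp_points):
    """known_surface:  (N, 3) points sampled on the known object.
    warped_surface: (N, 3) the same points after warping onto the novel object.
    grasp_points:   (K, 3) contact points recorded on the known object."""
    tree = cKDTree(known_surface)
    # Attach each grasp point to its nearest surface sample and apply the
    # displacement that the warp assigns to that sample.
    _, idx = tree.query(grasp_points)
    displacement = warped_surface[idx] - known_surface[idx]
    return grasp_points + displacement
```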
“…This works well for low DoF hands. Another class of approaches captures the global properties of the hand shape either at the point of grasping, or during the approach (Ben Amor et al, 2012). This global hand shape can additionally be associated with global object shape, allowing generalisation by warping grasps to match warps of global object shape (Hillenbrand & Roa, 2012).…”
Section: Introduction
confidence: 99%
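The statement above distinguishes approaches that capture the global hand shape during the approach rather than only at the final grasp. A small illustrative sketch of that idea, projecting a recorded approach trajectory of joint angles into a low-dimensional posture subspace (the names and the subspace itself are placeholders, reusing the shape of the earlier sketch):

```python
import numpy as np


def project_approach_trajectory(joint_angle_trajectory, mean, basis):
    """joint_angle_trajectory: (T, n_joints) hand postures recorded while the
    hand approaches the object (e.g. from a data glove).
    mean, basis: a low-dimensional posture subspace, as in the earlier sketch.
    Returns the (T, n_components) trajectory in subspace coordinates, a compact
    description of the global hand shape over the whole approach."""
    X = np.asarray(joint_angle_trajectory, dtype=float)
    return (X - mean) @ basis.T
```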
“…A subclass of these focused on learning a mapping from incomplete object views to grasp parameters [1], [8], [11], [15], [14], [19], while another subclass has aimed at transferring grasps across objects whose complete 3D shape is known [2], [12]. When addressing the problem of grasping a partially perceived object, authors have developed means of learning how to place the wrist of the gripper with respect to the part of the object that is perceived by the robot's vision system [8], [11], [15], [14].…”
Section: Related Work
confidence: 99%
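The first subclass mentioned above learns a direct mapping from an incomplete object view to grasp parameters such as a wrist pose. The sketch below is a hedged illustration of that idea, with a crude occupancy-histogram descriptor and a nearest-neighbour regressor standing in for the specific models used in the cited works.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor


def cloud_descriptor(partial_cloud, bins=8):
    """Reduce a partial point cloud (M, 3) to a fixed-length occupancy
    histogram so that clouds with different point counts become comparable."""
    centred = partial_cloud - partial_cloud.mean(axis=0)
    hist, _ = np.histogramdd(centred, bins=bins)
    return hist.ravel() / max(len(partial_cloud), 1)


def fit_view_to_grasp(partial_clouds, wrist_poses, k=3):
    """Train on pairs of (partial view, demonstrated wrist pose); poses are
    encoded here as [x, y, z, roll, pitch, yaw] relative to the cloud."""
    X = np.stack([cloud_descriptor(c) for c in partial_clouds])
    return KNeighborsRegressor(n_neighbors=k).fit(X, np.asarray(wrist_poses))


def predict_wrist_pose(model, partial_cloud):
    return model.predict(cloud_descriptor(partial_cloud)[None, :])[0]
```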
“…Santello et al [10] show that the variation in final imagined grasp poses for a large number of objects is quite small, with well over 80% of the variation explained by only two principal components. Recently, Ben Amor et al [11] used a low-dimensional sub-space built from a database of recorded human grasping postures to perform grasp optimization on a robot. The focus of their work is complementary to ours in that their approach synthesizes motions for "reach-and-grasp" tasks.…”
Section: Related Work
confidence: 99%
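The Santello et al. observation quoted above, that well over 80% of the variation in recorded grasp postures is captured by two principal components, reduces to a short computation. The sketch below assumes a matrix of recorded joint angles and is illustrative only.

```python
import numpy as np


def explained_by_two_components(postures):
    """postures: (n_grasps, n_joints) array of recorded joint angles."""
    X = np.asarray(postures, dtype=float)
    X = X - X.mean(axis=0)                    # centre the data
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    var = s ** 2                              # variance captured per component
    return var[:2].sum() / var.sum()          # fraction explained by the top 2


# A ratio above 0.8 would reproduce the "well over 80%" finding.
```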