2005
DOI: 10.1007/s00422-005-0575-x

Learning visuomotor transformations for gaze-control and grasping

Abstract: For reaching to and grasping of an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus …

Cited by 24 publications (35 citation statements) · References 53 publications
“…First, from the mixture model, the Gaussian j was chosen for which the noisy patch had the smallest normalized Mahalanobis distance p_j. Such a distance value was computed from the eigenvectors W_j (a d × q matrix containing the eigenvectors in its columns), the eigenvalues Λ_j (a diagonal matrix), and the residual variance per dimension σ²_j, as obtained from a spatially localized probabilistic principal component analysis, which is part of the above EM algorithm (Tipping and Bishop, 1999; Hoffmann et al., 2005). Second, the image patch was reconstructed based on the principal components of the chosen local model (Fig.…”
Section: Learning a Denoising Model
confidence: 99%
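
The excerpt above compresses the two denoising steps into one sentence. The sketch below unpacks them as a minimal illustration, not the cited authors' implementation: it assumes the eigenvector columns of W_j are orthonormal, that "normalized" means weighting the in-subspace deviation by the eigenvalues and the residual by σ²_j, and the dictionary layout and function names are hypothetical.

```python
import numpy as np

def normalized_mahalanobis(x, mu, W, lam, sigma2):
    """Distance of patch x to one local PPCA model.
    W: (d, q) orthonormal eigenvectors, lam: (q,) eigenvalues,
    sigma2: residual variance per dimension (all assumed given by EM)."""
    dev = x - mu
    z = W.T @ dev                           # coordinates in the principal subspace
    in_subspace = np.sum(z**2 / lam)        # eigenvalue-weighted subspace part
    residual = (dev @ dev - z @ z) / sigma2 # leftover deviation outside the subspace
    return in_subspace + residual

def denoise_patch(x, models):
    """models: list of dicts with keys 'mu', 'W', 'lam', 'sigma2' (hypothetical layout).
    Step 1: pick the local model with the smallest normalized Mahalanobis distance.
    Step 2: reconstruct the patch from that model's principal components."""
    dists = [normalized_mahalanobis(x, m['mu'], m['W'], m['lam'], m['sigma2'])
             for m in models]
    m = models[int(np.argmin(dists))]
    z = m['W'].T @ (x - m['mu'])            # project onto the principal components
    return m['mu'] + m['W'] @ z             # back-project: denoised patch
```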
“…The visual representation of space is maintained by a spherical-like coordinate system that is implicitly defined by the gaze direction. This representation is particularly suitable for autonomous learning and has been adopted in several recent works (Schenck et al., 2003; Hoffmann et al., 2005; Chinellato et al., 2011; Jamone et al., 2014).…”
Section: Discussion
confidence: 99%
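
To make the "spherical-like, gaze-defined" representation concrete, here is one hypothetical encoding: a fixated target is described by the pan and tilt angles of the gaze plus the fixation distance, and can be mapped back to head-centered Cartesian coordinates when needed. The axis convention and function name are assumptions, not taken from the cited works.

```python
import numpy as np

def gaze_to_cartesian(pan, tilt, distance):
    """Convert a gaze-centered target description (pan, tilt in radians,
    fixation distance) into head-centered Cartesian coordinates
    (x right, y up, z forward), under a single-viewpoint assumption."""
    x = distance * np.cos(tilt) * np.sin(pan)
    y = distance * np.sin(tilt)
    z = distance * np.cos(tilt) * np.cos(pan)
    return np.array([x, y, z])
```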
“…Our approach uses human demonstrations, which provide a model to guide the dynamics of motion as in open-loop visuomotor transformation techniques (Hoffmann et al. 2005; Natale et al. 2005, 2007; Hulse et al. 2009). A stable model of the high-dimensional visuomotor coordination can be learned by using only several human demonstrations, making it a very efficient, fast and intuitive way to estimate parameters of a robot visuomotor controller.…”
Section: Discussion on Controller Architecture
confidence: 99%
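
A minimal sketch of the idea of fitting a visuomotor map from a few demonstrations follows. The data are synthetic stand-ins and kernel ridge regression is only one possible regressor; none of this is taken from the cited approach.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy stand-in for a handful of demonstrations: each pairs the visual encoding
# of a target (here a 3-D position) with the arm posture (4 joint angles) used
# to reach it. Real human demonstrations would replace these arrays.
rng = np.random.default_rng(0)
visual_inputs = rng.uniform(-1.0, 1.0, size=(8, 3))              # 8 demonstrations
arm_postures = np.tanh(visual_inputs @ rng.normal(size=(3, 4)))  # surrogate postures

# Fit a smooth visuomotor map from the few demonstrations.
visuomotor_map = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0)
visuomotor_map.fit(visual_inputs, arm_postures)

# Open-loop use: a new visual target is mapped directly to a posture command,
# with no visual feedback correction during the movement itself.
def reach(visual_target):
    return visuomotor_map.predict(visual_target.reshape(1, -1))[0]

print(reach(np.array([0.2, -0.4, 0.7])))
```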
“…Not being able to rapidly and synchronously react to perturbations can cause fatal consequences for both the robot and its environment. Solutions to robotic visual-based reaching follow either of two well-established approaches: techniques that learn visuomotor transformations (Hoffmann et al. 2005; Natale et al. 2005, 2007; Hulse et al. 2009; Jamone et al. 2012), which operate in an open-loop manner, or visual servoing techniques (Espiau et al. 1992; Mansard et al. 2006; Natale et al. 2007; Chaumette and Hutchinson 2008; Jamone et al. 2012), which are closed-loop methods. Techniques that learn the visuomotor maps are very appealing because of their simplicity and practical applications.…”
Section: Robotic Visually Aided Manipulation and Obstacle Avoidance
confidence: 99%
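
The contrast between the two approaches can be sketched as follows; the function names and the simple proportional control law are illustrative assumptions, not the cited implementations.

```python
import numpy as np

def open_loop_reach(visual_target, visuomotor_map):
    """Learned-map approach: one query of the visuomotor map produces the
    posture command, which is then executed without further visual feedback."""
    return visuomotor_map(visual_target)

def visual_servoing_step(features, target_features, jacobian, gain=0.5):
    """Closed-loop approach (image-based visual servoing, in the spirit of
    Espiau et al. 1992): drive the image-feature error toward zero.
    Returns a velocity command; call once per control cycle with fresh
    visual measurements, so perturbations are corrected on the fly."""
    error = features - target_features
    return -gain * np.linalg.pinv(jacobian) @ error
```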