2007
DOI: 10.1142/s0219843607001096
Body Image Constructed From Motor and Tactile Images With Visual Information

Abstract: This paper proposes a learning model that enables a robot to acquire a body image for parts of its body that are invisible to itself. The model associates spatial perception based on motor experience and motor image with perception based on the activations of touch sensors and tactile image, both of which are supported by visual information. The tactile image can be acquired with the help of the motor image, which is thought to be the basis for spatial perception, because all spatial perceptions originate in m…

Cited by 47 publications (17 citation statements)
References 11 publications

“…In those examples, the number of DOFs of the arm is generally quite low (between two and five). Using self-organizing maps, Fuke et al. [9] learn the correspondence between visual, proprioceptive and tactile information in a simulated arm-face system. The work presented here differs from previous approaches in that the learning is performed entirely online and it can deal with a high number of DOFs.…”
Section: Related Work
confidence: 99%
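
The self-organizing-map approach this excerpt attributes to Fuke et al. can be illustrated with a toy sketch: one SOM is trained on concatenated proprioceptive and tactile vectors, and cross-modal recall then matches on the proprioceptive half alone and reads the tactile half out of the winning prototype. Everything below — the dimensions, the made-up forward model `toy_touch`, and the annealing schedule — is an illustrative assumption, not the authors' actual architecture or parameters.

```python
# Minimal sketch of cross-modal correspondence learning with a
# self-organizing map (SOM). All dimensions, the toy forward model,
# and the parameter values are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

GRID = 10          # SOM is a GRID x GRID sheet of units
DIM_PROP = 2       # proprioceptive input: two joint angles
DIM_TACT = 2       # tactile input: 2-D contact location

# Each unit stores a prototype for both modalities at once.
W = rng.uniform(0.0, 1.0, size=(GRID * GRID, DIM_PROP + DIM_TACT))
coords = np.array([(i, j) for i in range(GRID) for j in range(GRID)], float)

def toy_touch(angles):
    """Hypothetical forward model: joint angles -> contact point."""
    return np.array([0.5 + 0.4 * np.sin(angles[0]),
                     0.5 + 0.4 * np.cos(angles[1])])

STEPS = 5000
for t in range(STEPS):
    angles = rng.uniform(0.0, np.pi, DIM_PROP)               # random motor sample
    x = np.concatenate([angles / np.pi, toy_touch(angles)])  # joint multimodal vector

    # Best-matching unit over the full multimodal vector.
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))

    # Annealed learning rate and neighborhood radius.
    lr = 0.5 * (0.01 / 0.5) ** (t / STEPS)
    sigma = 3.0 * (0.5 / 3.0) ** (t / STEPS)
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian neighborhood on the sheet

    W += lr * h[:, None] * (x - W)            # pull prototypes toward the input

# Cross-modal recall: match on the proprioceptive half only,
# then read the tactile half of the winning prototype.
angles = np.array([1.0, 2.0])
bmu = np.argmin(np.linalg.norm(W[:, :DIM_PROP] - angles / np.pi, axis=1))
print("predicted touch:", W[bmu, DIM_PROP:], "actual:", toy_touch(angles))
```

The design point the excerpt highlights — why such batch-style map learning struggles with many DOFs — is visible here: the map's resolution is fixed by the grid size, so the number of units needed grows quickly as the proprioceptive dimension increases.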
“…This is in line with findings by Longo and Lourenco (2007), who state that when wielding a tool with the hand, the tool is integrated into the body schema, which can be interpreted as a manipulation of the arm length and therefore extends the size of peripersonal space. A lot of research in robotics was inspired by the concept of an adaptive body schema, which offers a mechanism to learn tool use and saves engineers the laborious work of predefining an artificial articulated agent's possibly changing body structure (Nabeshima et al., 2006; Fuke et al., 2007). More recently, work taking different approaches to connecting body schema learning with interpretations of peripersonal space for articulated agents has also been presented (Hersch et al., 2008; Fuke et al., 2009).…”
Section: 1
confidence: 99%
“…One approach in simulation is spatio-temporal correlation: learning topographic sensor structures from more or less random input [1]. Other simulated systems purposively seek contact with known external objects [2] or known body parts [3] to probe the sensor locations. In [4], topographic structures are used to classify interactive scenarios, such as hugging or hand-shaking between a human and a real robot, but the system gains no information about its motor space.…”
Section: A. Motivation
confidence: 99%
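
The spatio-temporal correlation idea paraphrased in this excerpt can be sketched in a few lines: under more or less random localized stimulation, physically adjacent sensors tend to co-activate, so pairwise correlation of their time series recovers the topographic structure without any prior knowledge of sensor positions. The 1-D skin layout, the Gaussian stimulus model, and the top-2 neighbor read-out below are illustrative assumptions, not the setup of the system cited as [1].

```python
# Minimal sketch of topographic structure learning via
# spatio-temporal correlation of touch sensors under random input.
# The skin layout and stimulus model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

N = 20                       # sensors along a 1-D "skin" strip
positions = np.arange(N)     # true sensor layout (hidden from the learner)
T = 2000                     # number of random touch events

# Each event is a Gaussian blob of pressure at a random location,
# so physically adjacent sensors tend to fire together.
centers = rng.uniform(0, N - 1, T)
activations = np.exp(-(positions[None, :] - centers[:, None]) ** 2 / 2.0)
activations += 0.05 * rng.standard_normal((T, N))   # sensor noise

# Pairwise correlation of the sensor time series.
C = np.corrcoef(activations.T)                      # shape (N, N)
np.fill_diagonal(C, 0.0)

# For each sensor, the two most correlated sensors are declared its
# neighbors; for interior sensors this recovers the chain topology.
neighbors = {i: sorted(int(j) for j in np.argsort(C[i])[-2:]) for i in range(N)}
print("inferred neighbors of sensor 5:", neighbors[5])   # -> [4, 6]
```

This also makes the excerpt's contrast concrete: the correlations reveal which sensors are adjacent, but nothing in the procedure links those sensors to the motor commands that produced the contacts, which is exactly the missing motor-space information the excerpt points out.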