2008
DOI: 10.1177/1059712308089185

Learning Multiple Goal-Directed Actions Through Self-Organization of a Dynamic Neural Network Model: A Humanoid Robot Experiment

Abstract: We introduce a model that accounts for cognitive mechanisms of learning and generating multiple goal-directed actions. The model employs the novel idea of the so-called "sensory forward model," which is assumed to function in inferior parietal cortex for the generation of skilled behaviors in humans and monkeys. A set of different goal-directed actions can be generated by the sensory forward model by utilizing the initial sensitivity characteristics of its acquired forward dynamics. The analyses on our robotic…

Cited by 33 publications (27 citation statements; citing years 2009–2023)
References 29 publications
“…Recently, we proposed a novel neural network model, the so-called sensory forward model, which utilizes a distributed representation scheme embedding multiple goal-directed behaviors in a single neural network model (Nishimoto, Namikawa, & Tani, 2008). The sensory forward model (Nishimoto et al., 2008) anticipates the coming sensation of the visuo-proprioceptive (VP) state (the egocentric visual state and the body posture state) based on the specified goal by means of the forward dynamics of a continuous-time recurrent neural network (CTRNN) model (Doya & Yoshizawa, 1989).…”
Section: Introduction (mentioning)
confidence: 99%
“…The sensory forward model (Nishimoto et al., 2008) anticipates the coming sensation of the visuo-proprioceptive (VP) state (the egocentric visual state and the body posture state) based on the specified goal by means of the forward dynamics of a continuous-time recurrent neural network (CTRNN) model (Doya & Yoshizawa, 1989). By utilizing the initial sensitivity characteristics of the nonlinear neuro-dynamics, different anticipatory trajectories of VP patterns are learned to be generated depending on the initial states given as the desired goals.…”
Section: Introduction (mentioning)
confidence: 99%
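The initial-sensitivity mechanism this excerpt describes — distinct goal-directed trajectories generated from distinct initial states of the same forward dynamics — can be sketched with a generic CTRNN rollout. The network size, random weights, readout, and time constant below are illustrative assumptions, not the authors' trained model.

```python
# Minimal CTRNN forward-dynamics sketch (illustrative, not the published model).
import numpy as np

def ctrnn_rollout(u0, W, W_out, tau=2.0, dt=1.0, steps=50):
    """Roll out CTRNN internal dynamics from initial state u0 and read out
    a predicted visuo-proprioceptive (VP) trajectory at each step."""
    u = np.array(u0, dtype=float)
    traj = []
    for _ in range(steps):
        # Leaky integration of du/dt = (-u + W @ tanh(u)) / tau
        u = u + (dt / tau) * (-u + W @ np.tanh(u))
        traj.append(W_out @ np.tanh(u))  # predicted next VP state
    return np.array(traj)

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=1.5, size=(n, n))      # recurrent weights (would be learned)
W_out = rng.normal(scale=0.5, size=(4, n))  # readout to a 4-dim VP vector

# Two nearby initial states (the "goal" codes) produce different predicted
# trajectories from the same fixed dynamics: initial sensitivity.
goal_a = ctrnn_rollout(np.zeros(n) + 0.10, W, W_out)
goal_b = ctrnn_rollout(np.zeros(n) + 0.11, W, W_out)
divergence = np.linalg.norm(goal_a[-1] - goal_b[-1])
```

In the published model the initial state is set by the specified goal and the weights are learned so that each goal-coding initial state unfolds into the corresponding anticipated VP sequence; here the weights are random, so the sketch only shows the state-dependence of the rollout.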
“…In the context of anticipation mechanisms while manipulating objects, Nishimoto et al (2008) proposed a dynamic neural network model of interactions between the inferior parietal lobe (IPL), representing human behavioural skills related to object manipulation and tool usage, and cells in the ventral premotor area (PMv), allowing learning, generation and recognition of goal-directed behaviours.…”
Section: Object Manipulation (mentioning)
confidence: 99%
“…If the role of an ANN is to predict what the next input will be rather than to provide an output, then the error signal is available: the difference between what the ANN predicted and what has actually happened. Specific implementations of predictive behaviors in robots include anticipatory mechanisms in vision (Hoffmann, 2007; Datteri et al., 2003), object manipulation (Nishimoto et al., 2008; Laschi et al., 2008), and locomotion (Azevedo et al., 2004; Gross et al., 1998), as described in the following subsections.…”
(mentioning)
confidence: 99%
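The self-generated training signal this excerpt describes — prediction minus observation, with no external teacher — can be illustrated with a minimal predictive learner. The linear dynamics, dimensions, and learning rate are hypothetical choices for the sketch.

```python
# Prediction-error learning sketch (illustrative): the network's own
# prediction error serves as the supervised signal.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.9, 0.2],
              [-0.2, 0.9]])  # true (unknown) next-input dynamics
W = np.zeros((2, 2))         # learned predictor weights
lr = 0.1                     # learning rate (assumed)

for _ in range(500):
    x = rng.normal(size=2)        # current input
    x_next = A @ x                # what actually happens next
    err = W @ x - x_next          # error: predicted minus actual
    W -= lr * np.outer(err, x)    # gradient step on squared prediction error
```

Because the target is simply the next observation, the error is available at every step without labels, which is the point the excerpt makes about predictive ANNs in robots.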
“…Recently, several works have proposed models that aim at explaining how mirror neurons can emerge from sensory-motor association learning [10], [12]. These works focus on the emergence of action recognition capabilities.…”
Section: Introduction (mentioning)
confidence: 99%