2014
DOI: 10.1007/978-3-319-11179-7_88
Learning to Look: A Dynamic Neural Fields Architecture for Gaze Shift Generation

Cited by 5 publications (6 citation statements) | References 19 publications
“…E.g., the vector-integration to end-point (VITE) neuronal motor-control model generates movement based on the currently estimated pose and the stored goal pose (Grossberg, 1988). Alternatively, one can use a saccadic eye-movement-generating neuronal architecture (Bell et al., 2014; Storck, 2014, 2015) to initiate the gaze to the memorized pose.…”
Section: Representing the Visual Scene in the Network (Map Formation)
confidence: 99%
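The VITE dynamics mentioned in this excerpt are well documented (Bullock & Grossberg): a difference vector relaxes toward the gap between the stored goal pose and the current pose, and a GO signal gates its outflow into the present-position command. Below is a minimal sketch; the function name, parameter values, and the simplified single-channel form (the full model uses rectified agonist/antagonist channels) are illustrative assumptions, not the cited implementation.

```python
import numpy as np

def vite_reach(target, pose, alpha=5.0, go=1.0, dt=0.01, steps=2000):
    """Euler integration of simplified VITE dynamics.

    v -- difference vector, relaxing toward (target - pose)
    p -- present-position command, integrating the GO-gated v
    (Single-channel form; the full model rectifies paired channels.)
    """
    v = np.zeros_like(pose, dtype=float)
    p = np.asarray(pose, dtype=float).copy()
    for _ in range(steps):
        v += dt * alpha * (-v + (target - p))  # difference-vector dynamics
        p += dt * go * v                       # GO-gated outflow to the pose
    return p
```

With these settings the commanded pose converges to the stored goal, e.g. `vite_reach(np.array([1.0, 2.0]), np.zeros(2))` approaches `[1, 2]`; scaling the GO signal rescales movement speed without changing the endpoint.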
“…ballistic) camera movements toward objects (here, the salient portions of the visual input). This adaptive looking system has been introduced recently [11], [12], [10] and is described briefly in this paper for completeness in Section III-A. The looking module generates motor commands that drive the motors of the camera head (the pink solid line in the figure), and at the same time creates a representation of the selected object in the gaze-direction space, i.e.…”
Section: A. Dynamic Neural Fields
confidence: 99%
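The selection step behind such a looking module is typically an Amari-type dynamic neural field: local excitation plus broader inhibition lets activation build a self-stabilized peak over the most salient input, which then defines the saccade target. A rough 1-D sketch follows; the kernel shape and every parameter value are illustrative assumptions, not those of the cited architecture (with stronger global inhibition the same field runs full winner-take-all selection).

```python
import numpy as np

def dnf_step(u, stim, w, h=-2.0, tau=10.0, dt=1.0):
    """One Euler step of a 1-D Amari field:
    tau * du/dt = -u + h + stim + conv(w, f(u)), sigmoid output f."""
    f = 1.0 / (1.0 + np.exp(-u))              # sigmoid firing rate
    lateral = np.convolve(f, w, mode="same")  # lateral interaction
    return u + (dt / tau) * (-u + h + stim + lateral)

n = 101
x = np.arange(n)
# interaction kernel: narrow local excitation minus broad inhibition
w = 3.0 * np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2) - 0.5
# two salience bumps in the input; the stronger one attracts the peak
stim = 3.0 * np.exp(-0.5 * ((x - 30) / 4.0) ** 2) \
     + 4.0 * np.exp(-0.5 * ((x - 70) / 4.0) ** 2)

u = np.full(n, -2.0)     # start at resting level h
for _ in range(300):
    u = dnf_step(u, stim, w)
peak = int(np.argmax(u)) # field position read out as the gaze target
```

After relaxation the field's maximum sits over the more salient stimulus (near position 70), which is the location the looking module would hand to the motor stage.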
“…We have demonstrated how the involved sensorimotor mapping between the retinal positions of the selected objects and the motor commands needed to saccade toward these objects may be initially learned and constantly updated [9], [10], [11]. We have implemented the architecture to control an autonomous humanoid robot to show that the model indeed may generate behaviour and learn autonomously in a real-world setting [12].…”
Section: Introduction
confidence: 99%
“…Recently, we have demonstrated how the presented architecture is capable of learning to perform precise saccades and to adapt to changes in the environment or in the sensorimotor plant [11]. Here, we modified the learning processes by initialising the gain maps with small random numbers and simulating a more natural learning process, in which the maps are learned in a less controlled learning session.…”
Section: A. Gain Maps Learning
confidence: 99%
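The learning setup described in this excerpt — gain maps initialised with small random numbers and corrected from post-saccadic error in an uncontrolled session — can be illustrated with a toy delta-rule learner. The plant, the update rule, and all names here are assumptions for illustration only; the cited architecture learns its gain maps within a neural-field framework rather than as a bare lookup table.

```python
import numpy as np

rng = np.random.default_rng(0)

PLANT_GAIN = 0.9   # hypothetical motor units needed per unit of retinal error

n = 21
positions = np.linspace(-1.0, 1.0, n)      # discretised retinal positions
gain_map = 0.01 * rng.standard_normal(n)   # small random initial gains

eta = 0.5
for _ in range(2000):                      # "less controlled" session:
    i = rng.integers(n)                    # targets appear at random positions
    r = positions[i]
    if r == 0.0:
        continue                           # foveal target: no saccade needed
    command = gain_map[i] * r              # saccade driven by the current gain
    error = PLANT_GAIN * r - command       # post-saccadic retinal error
    gain_map[i] += eta * error / r         # delta-rule correction of the gain
```

After the session, every non-foveal entry of the map has converged to the plant gain, so saccades land on target regardless of where in the visual field the target appeared.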
“…We have demonstrated the basic functionality of this model recently [11]. Here, we focus on the capability of the network to predict the motor outcome of the potential saccade, to use this prediction to create memories in a motor-based (non-retinal) reference frame, and, finally, to perform a sequence of saccades from memory.…”
Section: Introduction
confidence: 99%
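The idea highlighted here — predicting a potential saccade's motor outcome and memorising targets in a motor-based, non-retinal frame so a sequence can later be executed from memory — can be sketched in a few lines. This is a 1-D toy; the mapping, the names, and the noiseless ballistic execution are assumptions for illustration, not the cited network.

```python
def motor_mapping(r, gain=1.0):
    """Assumed learned retina-to-motor mapping (identity gain here)."""
    return gain * r

gaze = 0.0                                  # current gaze direction
retinal = [0.4, -0.7, 0.25]                 # targets seen from this fixation

# forward prediction: memorise each target as the predicted absolute
# gaze direction after the saccade (motor-based reference frame)
memory = [gaze + motor_mapping(r) for r in retinal]

# later: perform the sequence from memory, without new visual input;
# each command is the gap between the memorised and the current gaze
trace = []
for g_target in memory:
    gaze += g_target - gaze                 # ballistic saccade
    trace.append(gaze)
```

Storing predicted gaze directions rather than retinal positions is what keeps the memorised sequence executable: once the eyes have moved, the old retinal positions are stale, but the motor-based memory still points at the right directions.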