2005
DOI: 10.1007/11521082_8

Combining Visual Attention, Object Recognition and Associative Information Processing in a NeuroBotic System

Cited by 14 publications (8 citation statements)
References 12 publications
“…The full complexity of cognition would then arise from many such modules interacting. Operational cell assemblies of this kind have been used to implement language and behavioural components on robots (Fay et al 2005; Knoblauch et al 2005a, b; Markert and Palm 2006; Markert et al 2005, 2007). They can also be implemented in modern neural hardware (Indiveri 2007; Schemmel et al 2004; Wijekoon and Dudek 2008) and may therefore form a programming paradigm for future computing hardware (Palm 1982; Wennekers 2006).…”
Section: Introduction (mentioning)
confidence: 97%
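For readers unfamiliar with the cell-assembly substrate referenced in this statement, here is a minimal sketch, assuming a Willshaw/Palm-style binary associative memory as the underlying mechanism; the class name, pattern sizes and thresholding rule are illustrative assumptions, not taken from the cited work.

# Minimal sketch (not from the cited paper): a Willshaw/Palm-style binary
# associative memory, one common formalisation of the "cell assemblies"
# referenced above. Sizes and patterns below are illustrative assumptions.
import numpy as np

class BinaryAssociativeMemory:
    def __init__(self, n_in, n_out):
        # Binary synaptic matrix, initially empty.
        self.W = np.zeros((n_in, n_out), dtype=bool)

    def store(self, x, y):
        # Clipped Hebbian learning: a synapse is switched on (and stays on)
        # whenever its pre- and postsynaptic units are active together.
        self.W |= np.outer(x.astype(bool), y.astype(bool))

    def recall(self, x):
        # Dendritic sums, thresholded at the number of active input units.
        sums = x.astype(int) @ self.W.astype(int)
        return (sums >= x.sum()).astype(int)

# Store one sparse pattern pair and retrieve the output pattern from its cue.
x = np.zeros(100, dtype=int); x[[3, 17, 42, 64, 91]] = 1
y = np.zeros(100, dtype=int); y[[5, 23, 58, 77, 96]] = 1
mem = BinaryAssociativeMemory(100, 100)
mem.store(x, y)
assert (mem.recall(x) == y).all()

Sparse binary patterns of this kind are what allow many assemblies to be superimposed in one synaptic matrix, which is why the cited authors treat interacting assembly modules as a plausible programming paradigm for neural hardware.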
“…The details of the neuron model and the learning rule are given in Appendix A. The complete layout of all modules comprising our system (as of today) is described in Fay et al (2004). In order to demonstrate the functionality of the cortical network we have embedded it into a simple robot scenario.…”
Section: Introduction (mentioning)
confidence: 99%
“…Closely related to our work are the approaches of Arbib [12], Roy [13], Krichmar and Edelman [14] and of Billard and Hayes [15]. However, to our knowledge this is the first robot control architecture including simple language understanding, visual object recognition and action planning that is realized completely by neural networks [11] and that is able to resolve ambiguities and to learn new words during performance [10]. It also represents the first real-time functional simulation of populations of spiking neurons in more than ten cooperating cortical areas.…”
Section: Discussion (mentioning)
confidence: 80%
“…To show the system's correct semantic understanding of parsed sentences, the model is embedded into a robot [11]. To this end, the system is extended with a neural action-planning part and some simple motor programs (e.g.…”
Section: Discussion (mentioning)
confidence: 99%