2007
DOI: 10.1007/s00422-007-0176-y

Chained learning architectures in a simple closed-loop behavioural context

Abstract: By implementing two types of simple chained learning architectures, we demonstrate that stable behaviour can also be obtained in such architectures. The results further suggest that, in cases where inputs are sparse in time and learning normally fails because of weak correlations, chained architectures can be employed to obtain better behavioural performance than simple architectures.

Cited by 13 publications (13 citation statements)
References 41 publications
“…Learning, storing, inferring and executing sequences is a key topic in experimental [71]–[79] and theoretical neurosciences [80]–[82]; and robotics [83]–[86]. An early approach to modelling sequence processing focussed on feed-forward architectures.…”
Section: Discussion
confidence: 99%
“…This requires the ‘dynamic fusion’ of bottom-up sensory input and top-down predictions. Several authors, e.g., [83], [89]–[92], use recurrent networks to implement this fusion. Exact Bayesian schemes based on discrete hierarchical hidden Markov models, specified as a temporal hierarchy, have been used to implement memory and recognition [93].…”
Section: Discussion
confidence: 99%
“…Both rules are stable and stability for x_0 = 0 can be mathematically proved for both rules even when using filter banks ( Porr and Wörgötter 2006 , 2007 ). These rules have now been successfully tested in a variety of different applications ( Porr and Wörgötter 2006 ; Kolodziejski et al 2006 , 2007 ; Manoonpong et al 2007 ) and even chains of learning neurons can be constructed in a convergent way ( Kulvicius et al 2007 ).…”
Section: Results When Using a Filter Bank
confidence: 99%
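The correlation-based rule referred to in this citation statement can be sketched in a few lines. The following is a minimal, hypothetical illustration only: the resonator filter banks of the original work are replaced by simple first-order low-pass filters, and all function names, time constants, and the learning rate `mu` are assumptions, not the authors' implementation. The weight of each filtered predictive input changes in proportion to that input times the temporal derivative of the output, dw_k/dt = mu * x_k * dv/dt.

```python
import numpy as np

def lowpass_filter_bank(u, taus, dt=0.01):
    """Filter one input signal u(t) with a bank of first-order
    low-pass filters with different time constants (a simplified
    stand-in for the filter banks discussed in the text)."""
    filtered = np.zeros((len(taus), len(u)))
    for k, tau in enumerate(taus):
        y = 0.0
        for t in range(len(u)):
            y += dt / tau * (u[t] - y)
            filtered[k, t] = y
    return filtered

def iso_learn(u0, u1, taus, mu=0.01, dt=0.01):
    """Minimal ISO-style learning sketch: the weights of the
    predictive pathway u1 change with the correlation between the
    filtered input and the derivative of the output v."""
    x0 = lowpass_filter_bank(u0, taus, dt)   # reflex pathway
    x1 = lowpass_filter_bank(u1, taus, dt)   # predictive pathway
    w0 = np.ones(len(taus)) / len(taus)      # fixed reflex weights
    w1 = np.zeros(len(taus))                 # plastic weights
    v_prev = 0.0
    for t in range(len(u0)):
        v = w0 @ x0[:, t] + w1 @ x1[:, t]
        dv = (v - v_prev) / dt
        w1 += mu * x1[:, t] * dv * dt        # correlation-based update
        v_prev = v
    return w1
```

With a predictive pulse on `u1` arriving before a reflex pulse on `u0`, the plastic weights pick up the temporal correlation and move away from zero; the stability properties proved in the cited papers concern the full rule and filter design, not this toy version.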
“…This has the advantage that the error e_j can directly be identified with the reflex action (see Figure 1), so that v_k by design then executes both the reflex and the learned/adaptive actions. However, because the error signal e_j and learned signals v_j are mixed together at v_k, information is lost which cannot be recovered for deeper layers, thereby restricting learning to shallow network architectures (Kulvicius, Porr, & Wörgötter, 2007). This also means that both error and learned signals need to have a direct behavioural meaning because both will eventually cause a behavioural output.…”
Section: Discussion
confidence: 99%
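The information loss described in this citation statement can be made concrete with a toy example. This is an illustrative sketch only, not the cited architecture: the unit function, weights, and signals below are all hypothetical. Because each unit emits the sum v = w·x + e, a downstream unit receives only the mixture and cannot tell which decomposition of reflex (error) and learned contributions produced it.

```python
import numpy as np

def unit_output(w, x, e):
    """One learning unit: the mixed output v = w.x + e sums the
    reflex/error signal e and the learned contribution w.x, so
    the two are indistinguishable downstream."""
    return float(w @ x) + e

# Two-unit chain: the mixed output v_j of unit j is one input to unit k.
rng = np.random.default_rng(0)
w_j = rng.normal(size=3)           # learned weights of unit j
w_k = rng.normal(size=2)           # learned weights of unit k

x_j = np.array([0.2, -0.1, 0.5])   # predictive inputs to unit j
e_j = 1.0                          # reflex/error signal of unit j
v_j = unit_output(w_j, x_j, e_j)   # only the mixture is passed on

x_k = np.array([v_j, 0.3])         # unit k sees the mixture, not e_j
v_k = unit_output(w_k, x_k, 0.0)
```

Two very different decompositions, e.g. a purely learned response versus a purely reflexive one, can yield an identical v_j, which is why deeper layers cannot recover the error signal and learning stays shallow in such chains.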