2021
DOI: 10.1162/evco_a_00286
Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

Abstract: A fundamental aspect of learning in biological neural networks is the plasticity property which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not very well understood. The goal of this work is to discover interpretable l…

Cited by 14 publications (12 citation statements)
References 26 publications
“…An early example of a plastic neural network governed by a single learning rule is that of Chalmers [9] that evolves a learning rule to update randomly initialized weights of a shallow neural network. A more recent approach that also falls into this category is presented by Yaman et al [35]. In this work, the authors evolve a single discrete Hebbian rule to change the synapses of a randomly initialized network to solve a simple foraging task.…”
Section: Related Work (mentioning)
confidence: 99%
“…Past studies [44,45] applied Hebbian learning to evolve ANN controllers for mobile robots, and found improved performance over non-plastic ANNs. More recently, several studies have successfully evolved Hebbian learning rules [46,47,48,49] and achieved competitive results in reinforcement learning scenarios. For these reasons, we adopt Hebbian learning to model synaptic plasticity.…”
Section: Related Work (mentioning)
confidence: 99%
“…While several works [59,60,48] have successfully employed this "generalized" Hebbian learning to train ANNs, many variations of it exist [16]. In this work, we use the so-called Hebbian ABCD model [61,62,63,47], which updates the weights according to Δw_ij = η (A · o_i · o_j + B · o_i + C · o_j + D), where o_i and o_j are the pre- and postsynaptic activations. We call the four ABCD coefficients together a rule, and, in our model, there exists one separate rule per synapse.…”
Section: B. Hebbian Learning (mentioning)
confidence: 99%
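The ABCD update described in the passage above can be sketched in a few lines. The following is a minimal illustration, not the cited implementation; the function name, the learning rate value, and the use of scalar pre/post activations are assumptions:

```python
def abcd_update(w, pre, post, rule, eta=0.1):
    """One ABCD Hebbian step: dw = eta * (A*pre*post + B*pre + C*post + D).

    `rule` holds the four coefficients (A, B, C, D). With one rule per
    synapse, as the quoted passage describes, A, B, C, D would instead be
    arrays shaped like the weight matrix rather than scalars.
    """
    A, B, C, D = rule
    return w + eta * (A * pre * post + B * pre + C * post + D)

# Pure correlation term (B = C = D = 0): the weight changes only when
# pre- and postsynaptic neurons are active together.
w = abcd_update(0.5, pre=1.0, post=1.0, rule=(1.0, 0.0, 0.0, 0.0))
```

With `pre = 0.0` the correlation term vanishes and the weight is left unchanged, which is the locality property that makes the rule biologically plausible.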
“…For instance, some works neglect the neural modulator m (Najarro & Risi, 2020; Miconi et al., 2018), while others set B, C, and D to 0 (Miconi et al., 2018, 2019). The learned rule (A, B, C, D) will inevitably depend on the initial parameters W; however, learning plastic rules that do not depend on the initial parameters has also been investigated (Najarro & Risi, 2020; Yaman et al., 2021).…”
Section: Evolving Plasticity (mentioning)
“…In contrast, its parameters scale with O(n^2), which makes parameter updating a more powerful learning mechanism than recursion alone. Prior to our work, evolving plasticity (Soltoggio et al., 2008, 2018; Lindsey & Litwin-Kumar, 2020; Yaman et al., 2021) has been proposed to reproduce natural evolution and plasticity in simulation, as shown in Figure 1. Implementing plasticity is not straightforward: unlike gradient descent, plastic rules are not universal but have to be optimized beforehand, which is not possible without a further outer-loop optimizer over the inner-loop learning.…”
Section: Introduction (mentioning)
confidence: 99%
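The outer-loop/inner-loop structure described in the last statement can be sketched on a toy problem. Everything here is an illustrative assumption: the task (drive a single linear neuron's output toward 1.0), the elitist hill climber standing in for the evolutionary algorithms of the cited works, and all names and hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_loop(rule, steps=20):
    """Lifetime learning: apply a candidate ABCD rule to one linear
    'neuron' on a toy one-pattern task (a stand-in, not the benchmarks
    used in the cited works). Returns a fitness for the outer loop."""
    A, B, C, D = rule
    w, eta = 0.0, 0.1
    for _ in range(steps):
        pre = 1.0                 # toy input pattern
        post = w * pre            # linear neuron output
        w += eta * (A * pre * post + B * pre + C * post + D)
    return -abs(w * 1.0 - 1.0)    # fitness: learned output close to target 1.0

def outer_loop(generations=50, pop=20, sigma=0.1):
    """Outer optimizer: an elitist Gaussian-perturbation hill climber
    over the four rule coefficients."""
    best = rng.normal(size=4)
    for _ in range(generations):
        candidates = [best + sigma * rng.normal(size=4) for _ in range(pop)]
        candidates.append(best)   # elitism: the current best always survives
        best = max(candidates, key=inner_loop)
    return best

evolved_rule = outer_loop()
```

Note the dependency the quoted passage points at: `inner_loop` alone cannot produce a useful rule, because the rule's quality is only defined by the behavior it induces over a lifetime, so the outer optimizer is what actually shapes the plasticity.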