Neuro-Inspired Computational Elements Conference 2022
DOI: 10.1145/3517343.3517345
Stable Lifelong Learning: Spiking neurons as a solution to instability in plastic neural networks

Cited by 5 publications (3 citation statements)
References 15 publications
“…The use of structured connectomes could prevent highly dynamic weights from deteriorating behavior. This differs from a more general architecture where the behavior would change much more dramatically with small changes in weights (Schmidgall & Hays (2022b)). These perturbations become more dramatic when weights are changing simultaneously and independently.…”
Section: Discussion and Future Work
confidence: 92%
“…There have also been many previous contributions toward neuromodulated plasticity in non-spiking Artificial Neural Networks (ANNs) (62-66). However, plastic ANNs have been shown to struggle to maintain functional stability over time because their continuous nature keeps synapses in a constant state of change (41). This instability was shown to disturb performance far less in plastic SNNs than in plastic ANNs.…”
Section: Discussion
confidence: 99%
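The contrast drawn in the quote above (continuous activations keep a plastic ANN's synapses changing on every step, while spikes gate a plastic SNN's updates sparsely) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: a single synapse, a plain Hebbian update, and arbitrary constants, not the models of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)
steps = 500
eta = 0.01  # plasticity learning rate (illustrative)

# Continuous (ANN-like) unit: every step yields a nonzero activation,
# so the Hebbian update fires on every step and the weight drifts constantly.
w_ann = 0.5
ann_updates = 0
for _ in range(steps):
    pre = rng.random()              # continuous activation in (0, 1)
    post = np.tanh(w_ann * pre)     # continuous output
    dw = eta * pre * post
    if dw != 0.0:
        ann_updates += 1
    w_ann += dw

# Spiking unit: pre/post are sparse binary events, so the same Hebbian
# rule changes the weight only on steps where both neurons actually spike.
w_snn = 0.5
snn_updates = 0
v = 0.0                             # membrane potential
for _ in range(steps):
    pre = 1.0 if rng.random() < 0.1 else 0.0   # sparse input spikes
    v = 0.9 * v + w_snn * pre                  # leaky integration
    post = 1.0 if v > 0.5 else 0.0             # threshold spike
    if post:
        v = 0.0                                # reset after a spike
    dw = eta * pre * post
    if dw != 0.0:
        snn_updates += 1
    w_snn += dw

print("ANN weight updates:", ann_updates)   # every step
print("SNN weight updates:", snn_updates)   # only on coincident spikes
```

The point of the sketch is only the update counts: the continuous unit's weight is touched on all 500 steps, while the spiking unit's weight changes on a small fraction of them, which is one intuition for why plastic SNNs drift less.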
“…Learning how to learn online with synaptic plasticity through gradient descent. In learning applications with networks of spiking neurons, synaptic plasticity rules have historically been optimized through black-box optimization techniques such as evolutionary strategies (39-41), genetic algorithms (42, 43), or Bayesian optimization (44, 45). This is because spiking dynamics are inherently non-differentiable, and non-differentiable computations prevent gradient descent from being harnessed for optimization.…”
Section: Learning In Network With Plastic Synapses
confidence: 99%
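The black-box optimization of plasticity rules described in the quote above can be sketched as follows. This is a toy illustration under assumed choices (a three-parameter Hebbian rule on a one-synapse leaky integrate-and-fire circuit, optimized by a simple hill-climbing evolution strategy), not the actual setups of the cited works: the hard spike threshold blocks gradients, so the rule's parameters are searched rather than differentiated.

```python
import numpy as np

def run_episode(theta, steps=200, seed=42):
    """Simulate a tiny spiking circuit whose single synapse is updated online
    by a parameterized local plasticity rule, and score the result.
    theta = (eta, a_corr, b_decay) is a hypothetical parameterization:
    learning rate, Hebbian correlation term, and passive decay term."""
    eta, a_corr, b_decay = theta
    rng = np.random.default_rng(seed)       # fixed input spike train per episode
    w, v = 0.5, 0.0                         # synaptic weight, membrane potential
    target_rate, spikes_out = 0.3, 0        # arbitrary target firing rate
    for _ in range(steps):
        pre = 1.0 if rng.random() < 0.5 else 0.0  # Poisson-like input spike
        v = 0.9 * v + w * pre                     # leaky integration
        post = 1.0 if v > 0.6 else 0.0            # hard threshold: non-differentiable
        if post:
            v = 0.0                               # reset after a spike
            spikes_out += 1
        # Local plasticity applied online; gradients cannot flow through
        # the spike threshold, hence black-box search over theta instead.
        w = float(np.clip(w + eta * (a_corr * pre * post - b_decay * w), 0.0, 2.0))
    rate = spikes_out / steps
    return -(rate - target_rate) ** 2             # 0 is best

def evolve(pop=24, gens=40, sigma=0.1, seed=0):
    """A simple (1, pop) evolution strategy over the 3 plasticity parameters,
    keeping a candidate only when it does not decrease fitness."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.05, 1.0, 0.1])
    for _ in range(gens):
        candidates = theta + rng.normal(0.0, sigma, size=(pop, 3))
        fitness = [run_episode(c) for c in candidates]
        best = candidates[int(np.argmax(fitness))]
        if run_episode(best) >= run_episode(theta):
            theta = best
    return theta

best_theta = evolve()
print("evolved rule parameters:", best_theta)
print("fitness:", run_episode(best_theta))
```

Because selection is elitist, the evolved parameters never score worse than the initial guess; genetic algorithms or Bayesian optimization would simply swap out the `evolve` search loop while keeping the same non-differentiable episode as a black-box objective.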