2020
DOI: 10.1038/s41467-020-17866-2

Brain-inspired replay for continual learning with artificial neural networks

Abstract: Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, a mechanism thought to be important for protecting memories is the reactivation of neuronal activity patterns representing those memories. In artificial neural networks, such memory replay can be implemented as ‘generative replay’, which can successfully – and surprisingly efficiently – prevent catastrophic forgetting on toy …
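The mechanism named in the abstract, generative replay, can be illustrated with a short sketch. The code below is a minimal, hypothetical example and not the authors' implementation: it assumes a separate generative model with a `sample()` method trained on earlier tasks, and a frozen copy of the previous classifier that provides soft targets for the replayed inputs.

```python
# Minimal sketch of generative replay (illustrative; not the paper's code).
# Assumptions: `generator.sample(n)` returns n inputs resembling earlier tasks,
# and the model as it was before the new task serves as a frozen "teacher"
# on the replayed data.
import copy
import torch
import torch.nn.functional as F

def train_with_generative_replay(model, generator, new_task_loader,
                                 optimizer, replay_weight=0.5):
    prev_model = copy.deepcopy(model).eval()   # frozen copy of the old model
    model.train()
    for x_new, y_new in new_task_loader:
        # Loss on the new task's real data.
        loss_new = F.cross_entropy(model(x_new), y_new)

        # Replay loss: pseudo-inputs for earlier tasks, distilled soft targets.
        with torch.no_grad():
            x_replay = generator.sample(x_new.size(0))       # hypothetical API
            soft_targets = F.softmax(prev_model(x_replay), dim=1)
        loss_replay = F.kl_div(F.log_softmax(model(x_replay), dim=1),
                               soft_targets, reduction="batchmean")

        loss = (1 - replay_weight) * loss_new + replay_weight * loss_replay
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```

In this sketch the balance between new learning and replay is a single `replay_weight` hyperparameter; the paper's actual loss weighting and generator architecture may differ.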

Cited by 324 publications (314 citation statements)
References 50 publications
“…We simulated the consolidation of category knowledge in a large-scale neural network model which closely mirrors the form and function of the human ventral visual system, by replaying prototypical representations thought to be formed and initiated by the hippocampus. The notion that replay might be generative in nature has been suggested by smaller simulations [30, 31], however our results using a realistic model of the visual brain represent the most compelling evidence to date that humans are unlikely to replay experiences verbatim during rest and sleep to improve category knowledge, and are more likely to replay novel, imagined instances instead. In addition, the large number (117,000) of high-resolution complex naturalistic images we used for training in this experiment reflected real-world learning and facilitated the extraction of gist-like features.…”
Section: Discussion (mentioning)
confidence: 54%
“…However, an alternative approach which can address these outstanding questions, is to harness the recent considerable advances in artificial neural networks. While replay has been previously simulated in smaller-scale networks [29-31], in order to make direct comparisons with the human brain, we simulated learning and replay in a deep convolutional neural network (DCNN) which mirrors the brain’s layered structure and representations [32, 33] and approaches human-level recognition performance [34]. To simulate new learning in humans, we took a network which has already been trained to successfully categorise 1000 categories of objects in photographs, akin to a fully functional visual system in humans, and tasked it with learning 10 novel categories.…”
Section: Introduction (mentioning)
confidence: 99%
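As a rough, hedged sketch of the setup described in this statement (a network pretrained on 1000 categories that must learn 10 novel ones), one possibility is to widen the classifier head of a standard pretrained model and interleave replayed data with the new-class batches. The replay source (`replay_x`, `replay_y`) is an assumption here; it could be generated samples or stored prototypes, and the cited study's exact procedure may differ.

```python
# Illustrative only: extend a pretrained 1000-class DCNN to 10 extra classes and
# mix replayed old-class data into each fine-tuning step. Replay source, learning
# rate, and which layers are trained are assumptions, not the cited protocol.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
old_fc = model.fc
model.fc = nn.Linear(old_fc.in_features, 1000 + 10)   # widen head to 1010 classes
with torch.no_grad():
    model.fc.weight[:1000] = old_fc.weight            # preserve old-class weights
    model.fc.bias[:1000] = old_fc.bias

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def fine_tune_step(new_x, new_y, replay_x, replay_y):
    """One update mixing new-category batches with replayed old-category data."""
    x = torch.cat([new_x, replay_x])
    y = torch.cat([new_y, replay_y])
    loss = F.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```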
“…In this regard, NSD is a resource that will contribute to a virtuous cycle of competition between models derived from biological and artificial intelligence. In principle, brain-optimized networks might prove to be more effective for solving computer vision tasks (Cox and Dean, 2014; Fong et al, 2018; Toneva and Wehbe, 2019; van de Ven et al, 2020).…”
Section: Results (mentioning)
confidence: 99%
“…149 It should be noted that in most these cases the generative model exists outside the network itself, which is not biologically realistic in the case of the brain, although there are some exceptions. 150 What of those cases where the network itself acts the generative model? This is much closer to the case of the dreaming brain.…”
Section: Evidence From Deep Learning (mentioning)
confidence: 99%
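The distinction drawn in this last statement, between a generative model that sits outside the task network and a network that generates its own replay, can be made concrete with a small sketch: a shared encoder feeds both a classifier head (the forward pathway) and a decoder (a feedback pathway), so replayed inputs come from the model itself. This is a hypothetical, minimal illustration in the spirit of such internal replay; layer sizes, the data domain, and the sampling scheme are all assumptions.

```python
# Hypothetical sketch of a network that is its own generative model: one encoder
# feeds both a classifier (forward pathway) and a VAE-style decoder (feedback
# pathway), so replay is produced internally. Sizes and details are assumptions.
import torch
import torch.nn as nn

class SelfReplayNet(nn.Module):
    def __init__(self, in_dim=784, hid=400, z_dim=64, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.to_mu = nn.Linear(hid, z_dim)
        self.to_logvar = nn.Linear(hid, z_dim)
        self.classifier = nn.Linear(hid, n_classes)            # forward pathway
        self.dec = nn.Sequential(nn.Linear(z_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, in_dim))        # feedback pathway

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.classifier(h), self.dec(z), mu, logvar

    @torch.no_grad()
    def generate_replay(self, n):
        """Sample latent codes and decode them: the network replays to itself."""
        z = torch.randn(n, self.to_mu.out_features)
        x_replay = self.dec(z)
        y_replay = self.classifier(self.enc(x_replay)).argmax(dim=1)
        return x_replay, y_replay
```

Training such a model would combine a classification loss with reconstruction and KL terms, and the pairs returned by `generate_replay` would be interleaved with new data as in the earlier sketch.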