2019
DOI: 10.48550/arxiv.1909.08383
Preprint

A continual learning survey: Defying forgetting in classification tasks

Matthias De Lange, Rahaf Aljundi, Marc Masana et al.

Abstract: Artificial neural networks thrive in solving the classification problem for a particular rigid task, where the network resembles a static entity of knowledge, acquired through generalized learning behaviour from a distinct training phase. However, endeavours to extend this knowledge without targeting the original task usually result in a catastrophic forgetting of this task. Continual learning shifts this paradigm towards a network that can continually accumulate knowledge over different tasks without the need…


Cited by 82 publications (147 citation statements)
References 49 publications
“…Although humans and animals are able to continuously acquire new information over their lifetimes without catastrophically forgetting prior knowledge, artificial neural networks lack these capabilities (Parisi et al., 2019; De Lange et al., 2019). Replay of previous experiences or memories in humans has been identified as the primary mechanism for overcoming forgetting and enabling continual knowledge acquisition (Walker and Stickgold, 2004).…”
Section: Discussion (mentioning)
confidence: 99%
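
To make the replay mechanism described in this excerpt concrete, the sketch below shows one minimal way to interleave stored past-task examples with current-task training in PyTorch. The reservoir-sampled buffer, the equal loss weighting, and the names `ReplayBuffer` and `train_step` are illustrative assumptions, not the method of any specific cited paper.

```python
# Minimal experience-replay sketch (illustrative, not from the cited papers).
import random
import torch
import torch.nn.functional as F

class ReplayBuffer:
    """Reservoir-sampled memory of (input, label) pairs from earlier tasks."""
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.data = []   # stored (x, y) tensor pairs, all with matching shapes
        self.seen = 0    # total examples offered so far

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Reservoir sampling keeps a uniform sample over everything seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_step(model, optimizer, buffer, x_new, y_new, replay_batch=32):
    """One optimisation step mixing current-task data with replayed memories."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_new), y_new)      # current-task loss
    if buffer.data:                                  # add a replay term, if any
        x_old, y_old = buffer.sample(replay_batch)
        loss = loss + F.cross_entropy(model(x_old), y_old)
    loss.backward()
    optimizer.step()
    for x, y in zip(x_new, y_new):                   # remember current examples
        buffer.add(x.detach(), y.detach())
    return loss.item()
```
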
“…Replay includes contents from both new and old memories […] networks (Parisi et al., 2019; De Lange et al., 2019), we provide the first comprehensive review that integrates and identifies the gaps between replay in these two fields. While it is beyond the scope of this paper to review everything known about the biology of replay, we highlight the salient differences between known biology and today's machine learning systems to help biologists test hypotheses and help machine learning researchers improve algorithms.…”
Section: Replay Mechanism, Role in Brain, Use in Deep Learning (mentioning)
confidence: 99%
“…Following Lange et al. (2019), we divide continual learning methodologies into three different categories. We have regularisation-based methods (Kirkpatrick et al., 2017; Aljundi et al., 2018; Zenke et al., 2017; Kolouri et al., 2019) that regularize the neural network parameters to not change drastically from those learned on previous tasks.…”
Section: Related Work (mentioning)
confidence: 99%
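
The regularisation-based category this excerpt names can be illustrated with a short sketch in the spirit of EWC (Kirkpatrick et al., 2017): after finishing a task, estimate a diagonal (empirical) Fisher matrix and penalise later parameter drift weighted by it. The helper names `fisher_diagonal` and `ewc_penalty`, the batch count, and the penalty strength `lam` are illustrative assumptions, not the exact formulation of any cited paper.

```python
# Sketch of an EWC-style quadratic penalty (illustrative simplification).
import torch
import torch.nn.functional as F

def fisher_diagonal(model, loader, n_batches=10):
    """Empirical diagonal Fisher: average of squared loss gradients."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:
            break
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 / n_batches
    return fisher

def ewc_penalty(model, old_params, fisher, lam=1000.0):
    """Penalise drift from the previous task's weights, weighted by Fisher."""
    penalty = torch.zeros(())
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return lam / 2.0 * penalty

# When training the next task, the total loss would be something like:
#   loss = F.cross_entropy(model(x), y) + ewc_penalty(model, old_params, fisher)
```
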
“…In other words, when the system tried to optimize the loss function corresponding to the second task, it moved to a region of parameter space where it was not able to solve the first task. Inspired by human and animal learning [10-16] or even by the behavior of physical systems [17-24], modern approaches to this problem try to constrain the change of the parameters during training of the second task to avoid forgetting the first one [25-35].…”
(mentioning)
confidence: 99%
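
The constraint idea in this excerpt, keeping task-2 training from leaving the region of parameter space where task 1 is solved, can be sketched even more simply with a plain L2 anchor toward the task-1 weights. The uniform (unweighted) penalty and the strength `lam` are illustrative simplifications of the constraint-based methods cited, not any one of them.

```python
# Sketch: constrain parameter change on task 2 via an L2 anchor (illustrative).
import torch
import torch.nn.functional as F

def train_second_task(model, optimizer, loader, lam=100.0, epochs=1):
    # Snapshot the task-1 solution before any task-2 updates.
    anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            task_loss = F.cross_entropy(model(x), y)
            # Quadratic penalty on how far each weight has moved from task 1.
            drift = sum(((p - anchor[n]) ** 2).sum()
                        for n, p in model.named_parameters())
            (task_loss + lam / 2.0 * drift).backward()
            optimizer.step()
```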