2020
DOI: 10.1007/978-3-030-58598-3_28

REMIND Your Neural Network to Prevent Catastrophic Forgetting

Cited by 162 publications (98 citation statements)
References 58 publications

“…Many replay algorithms for DNNs store past observations, and during replay internally convert observations into a suitable feature space using a previously learned transformation, such as a convolutional network (e.g., Mnih et al, 2015). But the benefits of directly storing internal representations for replay are increasingly acknowledged (Kapturowski et al, 2019;Iscen et al, 2020;Caccia et al, 2019;Hayes et al, 2019;van de Ven et al, 2020;Pellegrini et al, 2019;Hayes et al, 2021). Amongst others, storing internal representations is often more memory efficient (Iscen et al, 2020;Hayes et al, 2019), while observations can still be recreated from compressed internal representations if they are needed (van de Ven et al, 2020).…”
Section: Replay Can Reflect State Representations
confidence: 99%
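The "storing internal representations" idea in the statement quoted above can be made concrete with a short sketch. The PyTorch snippet below is a hypothetical minimal latent-replay loop, not the implementation of any of the cited papers: a frozen backbone encodes incoming data, the resulting features are cached, and stored features are interleaved with new ones when the classifier head is updated.

import torch
import torch.nn as nn

# Hypothetical latent-replay sketch: cache penultimate-layer features
# instead of raw images and replay them while training only the head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
head = nn.Linear(128, 10)
for p in backbone.parameters():
    p.requires_grad_(False)              # backbone is frozen after base training

feature_buffer = []                      # (feature, label) pairs from past data
optimizer = torch.optim.SGD(head.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def train_step(x_new, y_new, replay_size=32):
    with torch.no_grad():
        z_new = backbone(x_new)          # encode new observations once
    if feature_buffer:                   # mix in a random sample of stored features
        idx = torch.randint(len(feature_buffer), (min(replay_size, len(feature_buffer)),))
        z_old = torch.stack([feature_buffer[int(i)][0] for i in idx])
        y_old = torch.stack([feature_buffer[int(i)][1] for i in idx])
        z, y = torch.cat([z_new, z_old]), torch.cat([y_new, y_old])
    else:
        z, y = z_new, y_new
    loss = loss_fn(head(z), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    feature_buffer.extend(zip(z_new, y_new))   # keep new features for future replay

As a smoke test, train_step(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))) runs one update; the point of the sketch is that only 128-dimensional features, not raw images, ever enter the buffer.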
“…In computer vision, most of continual learning strategies applied on large-scale datasets use rehearsal learning: a limited amount of the training data of old classes is kept during training [55]. This data is usually kept in raw form (e.g., pixels) [54,5,10] but can also be compressed [29,34], or trimmed [18] to reduce memory overhead; others store only a model to generate new samples of past classes [35,60,44]. In addition, most approaches aim at limiting the changes in the model when new classes are learned.…”
Section: Related Work
confidence: 99%
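For the "limited amount of the training data of old classes is kept" strategy described in the quote above, the usual ingredient is a bounded exemplar memory. The sketch below is an assumed minimal version using reservoir sampling (the class name and capacity are illustrative, not taken from the cited papers); it keeps the memory footprint fixed no matter how long the class stream gets.

import random

class RehearsalMemory:
    """Hypothetical fixed-budget memory of raw (x, y) exemplars."""
    def __init__(self, capacity=2000):
        self.capacity = capacity
        self.data = []                   # stored exemplars
        self.seen = 0                    # examples observed so far

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:        # keep with probability capacity / seen
                self.data[j] = (x, y)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

Every streamed example survives with equal probability, so old classes stay represented without the buffer growing; published methods often replace the random policy with herding or per-class quotas.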
“…In addition, the system needs to store the original images as well as a GAN to generate images and such a process is memory demanding. The REMIND approach in a neural network that can be used to prevent catastrophic forgetting [27] is another Convolutional Neural Network (CNN)-based model that tackles the problem of catastrophic forgetting using rehearsal. Unlike the prior work where raw images of the previous classes are stored, these methods store quantized tensors for rehearsal, which is less memory demanding.…”
Section: Related Work and Motivation
confidence: 99%
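The "quantized tensors for rehearsal" point in the last quote is what makes REMIND memory-efficient. REMIND itself product-quantizes mid-level CNN feature maps, which is not reproduced here; as a rough stand-in, the sketch below uses plain 8-bit uniform quantization of a feature tensor just to show why storing integer codes instead of raw images shrinks the replay buffer (the tensor shape and helper names are illustrative).

import numpy as np

def quantize(z, n_bits=8):
    """Uniform quantization of a float tensor to n_bits integer codes."""
    lo, hi = float(z.min()), float(z.max())
    scale = (hi - lo) / (2 ** n_bits - 1)
    scale = scale if scale > 0 else 1.0   # guard against constant tensors
    codes = np.round((z - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return codes.astype(np.float32) * scale + lo

feature_map = np.random.randn(512, 7, 7).astype(np.float32)  # stand-in CNN feature map
codes, lo, scale = quantize(feature_map)                      # this is what would be stored
replayed = dequantize(codes, lo, scale)                       # fed to the remaining layers
print(feature_map.nbytes, codes.nbytes)                       # 100352 vs 25088 bytes

Uniform 8-bit codes already give a 4x saving over float32 features; with product quantization, as used by REMIND, each group of channels is replaced by an index into a small learned codebook, so the compression of the stored tensors is considerably higher.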