2020
DOI: 10.48550/arxiv.2002.08165
Preprint
Using Hindsight to Anchor Past Knowledge in Continual Learning

Cited by 17 publications (28 citation statements) | References 0 publications
“…The fifth setting is on CIFAR10, where we use the commonly used setting from GMED [14], which divides CIFAR10 into 5 tasks equally, and we compare with online continual learning methods: AGEM [5], BGD [36], GEM [22], GSS-Greedy [3], HAL [4], ER [31], MIR [1], and GMED [14].…”
Section: Methods (mentioning)
confidence: 99%
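To make the task construction in this excerpt concrete, below is a minimal sketch of an equal 5-way class split of CIFAR10. It assumes classes are grouped into tasks in label order (0-1, 2-3, ...), which may differ from the exact ordering used by GMED; the function name and defaults are illustrative, not part of any cited codebase.

```python
# Hypothetical sketch: partition CIFAR10 into 5 tasks of 2 classes each,
# assuming tasks follow label order (the GMED split may order classes differently).
from torch.utils.data import Subset
from torchvision.datasets import CIFAR10

def split_cifar10_tasks(root="./data", num_tasks=5, train=True):
    dataset = CIFAR10(root=root, train=train, download=True)
    classes_per_task = 10 // num_tasks  # 2 classes per task
    tasks = []
    for t in range(num_tasks):
        task_classes = range(t * classes_per_task, (t + 1) * classes_per_task)
        indices = [i for i, y in enumerate(dataset.targets) if y in task_classes]
        tasks.append(Subset(dataset, indices))
    return tasks
```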
“…Rehearsal-based methods [7,8,16] construct a data buffer to save samples from older tasks to train with data from the current task. Based on this simple yet effective idea, many recent methods improve upon it by involving additional knowledge distillation penalties [3,6,46,57], or leveraging self-supervised learning techniques [4,42]. Despite their conceptual simplicity, rehearsal-based methods achieve state-of-the-art performance on various benchmarks [32,40].…”
Section: Related Work (mentioning)
confidence: 99%
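As a rough illustration of the rehearsal idea this excerpt describes, the sketch below maintains a fixed-size buffer of past examples via reservoir sampling, a common choice in ER-style methods. The class name and interface are assumptions for illustration, not the API of any cited method.

```python
import random

class ReplayBuffer:
    """Fixed-size rehearsal buffer using reservoir sampling, so the stored
    examples are an unbiased sample of the stream seen so far (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # stored (x, y) pairs from older tasks
        self.seen = 0    # number of examples observed so far

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Replace a stored item with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, batch_size):
        # Mini-batch of past examples to mix with the current task's data.
        return random.sample(self.data, min(batch_size, len(self.data)))
```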
“…Continual Learning Following Lange et al. [2019], we review the related methods for alleviating catastrophic forgetting in continual learning in three different but overlapping categories. Replay-based methods store and replay a memory of the examples or knowledge learned so far [Rebuffi et al., 2016, Lopez-Paz and Ranzato, 2017, Shin et al., 2017, Riemer et al., 2018, Rios and Itti, 2018, Zhang et al., 2019, Chaudhry et al., 2020]. Regularization-based methods constrain the parameter updates while learning new tasks to preserve previous knowledge.…”
Section: Related Work (mentioning)
confidence: 99%
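For the regularization-based category mentioned in this last excerpt, a minimal sketch of the idea is a quadratic penalty that anchors parameters near the values learned on earlier tasks (as in EWC, where per-parameter importance comes from the Fisher information). The function and argument names below are illustrative assumptions.

```python
import torch

def regularized_loss(model, task_loss, old_params, importance, lam=1.0):
    """Task loss plus a quadratic penalty discouraging parameters from
    drifting away from their post-task values (EWC-style sketch).

    old_params / importance: dicts keyed by parameter name (assumed inputs,
    e.g. importance estimated from the Fisher information after each task).
    """
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return task_loss + lam * penalty
```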