2019 · Preprint
DOI: 10.48550/arxiv.1903.08671

Gradient based sample selection for online continual learning

Abstract: A continual learning agent learns online with a non-stationary and never-ending stream of data. The key to such a learning process is to overcome the catastrophic forgetting of previously seen data, a well-known problem of neural networks. To prevent forgetting, a replay buffer is usually employed to store previous data for the purpose of rehearsal. Previous works often depend on task boundaries and i.i.d. assumptions to properly select samples for the replay buffer. In this work, we formulate sample …
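The replay-buffer rehearsal the abstract describes can be sketched minimally. The snippet below is an illustrative reservoir-sampling buffer, a common task-boundary-free baseline, not the gradient-based selection the paper itself proposes; the class and method names are hypothetical.

```python
import random


class ReservoirReplayBuffer:
    """Fixed-size replay buffer filled by reservoir sampling.

    Every item from an unbounded stream is retained with equal
    probability, without needing task boundaries or i.i.d.
    assumptions about when the data distribution shifts.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        """Offer one streamed sample to the buffer."""
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Keep the new sample with probability capacity / n_seen,
            # evicting a uniformly chosen stored sample.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def sample(self, k):
        """Draw a rehearsal mini-batch of stored past samples."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

During online training, each incoming mini-batch would be combined with a batch drawn from the buffer, so the gradient step also rehearses earlier data and mitigates forgetting.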

Cited by 19 publications (33 citation statements) · References 7 publications
“…As such, most solutions to the online continual learning problem rely on a buffer of previous memories, which are retrieved when learning new samples. A good body of online continual learning work (Borsos et al, 2020; Chaudhry et al, 2019; Aljundi et al, 2019) proposes solutions for selecting which samples should be stored. Alternatively, Caccia et al (2019) investigate which samples should be retrieved.…”
Section: Related Work
confidence: 99%
“…Anytime evaluation and computational constraints: a critical component of continual learning systems, particularly in the "online" setting, is the ability to use the learner at any point (De Lange et al, 2019). Although most works in the online (one pass through the data) setting report results throughout the stream (Lopez-Paz et al, 2017; Aljundi et al, 2019), several prior works have reported the final accuracy as a proxy (Caccia et al, 2019; Shim et al, 2020). However, a lack of anytime evaluation opens the possibility of exploiting the metrics by proposing offline learning baselines that are inherently incompatible with anytime evaluation (Prabhu et al, 2020) and are not "online" learners.…”
Section: Evaluation Framework Considerations
confidence: 99%
“…The majority of previous works focused on addressing the well-known catastrophic forgetting issue (McCloskey and Cohen 1989). According to the mechanism of memory consolidation, current approaches are categorized into three types: (i) experiential rehearsal-based approaches, which focus on replaying episodic memory (Robins 1995) and whose core is selecting representative samples or features from historical data (Rebuffi et al 2017; Aljundi et al 2019; Bang et al 2021); (ii) distributed memory representation approaches (Fernando et al 2017; Mallya and Lazebnik 2018), which allocate individual networks to specific knowledge to avoid interference, represented by Progressive Neural Networks (PNN) (Rusu et al 2016).…”
Section: Related Work
confidence: 99%