2022
DOI: 10.1007/978-3-031-13324-4_47
Practical Recommendations for Replay-Based Continual Learning Methods

Cited by 9 publications (8 citation statements)
References 20 publications
“…To solve this issue, there are currently three categories of approaches, i.e., replay methods, regularization methods, and parameter isolation methods. Replay involves periodically training on a subset of upstream task data, thereby retaining knowledge of previous tasks and balancing old and new information (Rebuffi et al 2017; Rolnick et al 2019; Liu et al 2020; Merlin et al 2022). However, storing and managing upstream task data poses challenges in terms of efficiency, particularly in the contemporary era of massive datasets (Schuhmann et al 2022; Li et al 2023).…”
Section: Fine-tuning Methods
confidence: 99%
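The replay strategy described in the excerpt above — retaining a small subset of earlier task data and mixing it into later training — can be sketched as a fixed-capacity memory. This is a minimal illustration only; the class name and the choice of reservoir sampling are assumptions for the sketch, not details taken from the cited paper or the works it references.

```python
import random


class ReplayBuffer:
    """Fixed-capacity memory of past-task examples for rehearsal.

    Uses reservoir sampling so the stored subset stays an
    approximately uniform sample of everything seen so far.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.seen = 0  # total number of examples offered to the buffer

    def add(self, example):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # Replace a stored item with probability capacity / seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, batch_size):
        # Rehearsal batch drawn from old-task memory, to be mixed
        # with the current task's batch during training.
        return random.sample(self.memory, min(batch_size, len(self.memory)))
```

In a training loop, each new-task batch would be concatenated with a batch drawn from `sample()`, which is how replay balances old and new information.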
“…Finally, OLOR is incorporated into optimizers, thereby introducing negligible extra computational overhead. It also works well with popular optimizers such as Adam (Loshchilov and Hutter 2017; Guan 2023) and SGD (Keskar and Socher 2017), meeting specific needs under var-…”
Section: Introduction
confidence: 96%
“…Other strategies have been proposed that use various metrics to choose more representative elements for the memory [10, 16, 17, 2]. Some research has focused on the impact of hyperparameters on certain methods [33] or the effect of rehearsal methods on loss functions [46]. Other studies have explored methods for selecting elements from the memory, such as selecting elements based on how much their loss would be affected [1] or using a ranking based on the importance of preserving prior knowledge [20].…”
Section: Related Work
confidence: 99%
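The excerpt above mentions selecting memory elements by a scoring metric, for example one derived from the loss. A generic sketch of that idea is ranking candidates by a user-supplied score and keeping the top k; the function name and the high-loss-first heuristic are illustrative assumptions, not the algorithm of any specific cited work.

```python
import heapq


def select_memory_by_score(examples, score_fn, k):
    """Keep the k candidate examples with the highest score.

    `score_fn` maps one example to a scalar (e.g. its current loss,
    or an estimate of how much forgetting it would reveal). Both the
    name and the scoring choice are hypothetical placeholders.
    """
    return heapq.nlargest(k, examples, key=score_fn)
```

Replay methods differ mainly in the choice of `score_fn`: a uniform random score recovers plain reservoir-style selection, while loss- or gradient-based scores prioritize examples the model is most at risk of forgetting.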