2021 | DOI: 10.1609/aaai.v35i11.17159

Online Class-Incremental Continual Learning with Adversarial Shapley Value

Abstract: As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption. While memory replay techniques have shown exceptional promise for this task of continual learning, the best method for selecting which buffered images to replay is still an open question. In this paper, we specifically focus on the online class-incremental setting where a model need…
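For orientation, the replay buffer the abstract refers to is typically a fixed-size memory updated online. The sketch below shows reservoir sampling, a standard buffer-update rule in this setting; it is a minimal illustration with invented names (`ReservoirBuffer`, `update`, `sample`), not code from the paper.

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory updated with reservoir sampling:
    every stream example ends up stored with equal probability."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []       # stored (x, y) pairs
        self.num_seen = 0    # total stream examples observed so far

    def update(self, x, y):
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Overwrite a random slot with prob. capacity / num_seen.
            j = random.randrange(self.num_seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        # Uniform retrieval; methods like ASER re-rank instead.
        return random.sample(self.data, min(k, len(self.data)))
```

Which samples to retrieve from such a buffer for replay is exactly the open question the paper addresses.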

Cited by 113 publications (41 citation statements)
References 13 publications
“…Alternatively, some sampling methods focus on better memory retrieval strategies that reduce forgetting, e.g. MIR [2], ASER [42], and GMED [22]. (iii) In other approaches, GDumb [36] proposed a degenerate solution to the problem of online learning, ignoring the stream data and learning only on the memory samples.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
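To make the cited retrieval strategies concrete, below is a hedged PyTorch-style sketch of the idea behind MIR (maximally interfered retrieval): take a virtual gradient step on the incoming batch, then replay the buffered samples whose loss increases most under that step. All names (`model`, `buffer_x`, the learning rate) are assumptions for illustration; ASER replaces the interference score with an adversarial Shapley value, which this sketch does not implement.

```python
import copy
import torch
import torch.nn.functional as F

def retrieve_most_interfered(model, incoming_x, incoming_y,
                             buffer_x, buffer_y, k, lr=0.1):
    """Return the k buffered samples whose loss grows most after a
    virtual SGD step on the incoming batch (MIR-style retrieval)."""
    with torch.no_grad():
        pre_loss = F.cross_entropy(model(buffer_x), buffer_y,
                                   reduction="none")

    # Virtual update: one SGD step on a throwaway copy of the model.
    virtual = copy.deepcopy(model)
    opt = torch.optim.SGD(virtual.parameters(), lr=lr)
    opt.zero_grad()
    F.cross_entropy(virtual(incoming_x), incoming_y).backward()
    opt.step()

    with torch.no_grad():
        post_loss = F.cross_entropy(virtual(buffer_x), buffer_y,
                                    reduction="none")

    # Rank buffered samples by how much the virtual step hurt them.
    idx = torch.topk(post_loss - pre_loss,
                     min(k, buffer_x.size(0))).indices
    return buffer_x[idx], buffer_y[idx]
```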
“…For example, Herding [44], K-Means [15], OCS [57], InfoRS [49], RM [6], and GSS [5] aim to maximize diversity among the samples selected for training, using different metrics. MIR [3], ASER [48], and CLIB [27] rank the samples according to their informativeness and select the top-k. Lastly, balanced sampling [16,17,40] selects samples such that an equal distribution of classes is used for training. In our experiments, we only consider previous sampling strategies that are applicable to our setup and compare them against Naive.…”
Section: Dissecting Continual Learning Systems
Citation type: mentioning (confidence: 99%)
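The "balanced sampling" strategy named at the end of the excerpt can be sketched in a few lines: split the buffer by class and draw an (almost) equal quota from each. This is a generic illustration under an assumed data layout (a list of `(x, y)` pairs with hashable labels), not code from any cited paper.

```python
import random
from collections import defaultdict

def class_balanced_sample(buffer, k):
    """Draw up to k samples with classes represented as evenly as
    possible; classes with too few samples contribute what they have."""
    if not buffer:
        return []
    by_class = defaultdict(list)
    for x, y in buffer:
        by_class[y].append((x, y))

    classes = list(by_class)
    random.shuffle(classes)             # spread the remainder fairly
    quota, extra = divmod(k, len(classes))

    chosen = []
    for i, c in enumerate(classes):
        want = quota + (1 if i < extra else 0)
        pool = by_class[c]
        chosen.extend(random.sample(pool, min(want, len(pool))))
    return chosen
```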
“…Similarity learning has emerged as a significant area of machine learning research with many real-world applications [11][12][13][14]. Most current research focuses on preserving exemplars of previous classes to remind the network of past knowledge by finding exemplars that are useful for knowledge retention [15][16][17]. Recent advances include techniques such as using supervised contrastive learning with nearest-class-mean to maximize the use of exemplars [18].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
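The exemplar-selection line of work quoted here is often instantiated with greedy herding: pick exemplars one by one so that their running feature mean stays as close as possible to the full class mean. The NumPy sketch below assumes per-class features have already been extracted; it illustrates the cited Herding criterion in its simplest form, with all names invented for the example.

```python
import numpy as np

def herding_select(features, m):
    """Greedily pick m exemplar indices whose mean feature vector
    best approximates the class mean. features: (N, d) array, m <= N."""
    assert m <= len(features)
    class_mean = features.mean(axis=0)
    running_sum = np.zeros_like(class_mean)
    selected = []

    for t in range(1, m + 1):
        # Running mean if each candidate were added next.
        candidate_means = (running_sum + features) / t
        dists = np.linalg.norm(candidate_means - class_mean, axis=1)
        dists[selected] = np.inf        # never pick an index twice
        best = int(np.argmin(dists))
        selected.append(best)
        running_sum += features[best]
    return selected
```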