2022
DOI: 10.1007/978-3-031-19806-9_23

FOSTER: Feature Boosting and Compression for Class-Incremental Learning


Cited by 131 publications (53 citation statements)
References 29 publications
“…Parameter-isolation-based methods increase the model parameters in each new incremental phase to prevent the knowledge forgetting caused by parameter overwriting. Some of them [19,38,44,47,48] proposed to progressively expand the size of the neural network to learn newly arriving data. Others [1,22,26,49] froze a part of the network parameters (to maintain the old-class knowledge) to alleviate the problem of knowledge overwriting.…”
Section: Related Work (mentioning, confidence: 99%)
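The parameter-isolation strategies quoted above (progressively expanding the network and freezing part of the old parameters between phases) can be illustrated with a minimal PyTorch sketch. The two-branch split and the names `old_branch` and `new_branch` are illustrative assumptions, not the exact FOSTER architecture.

```python
import torch
import torch.nn as nn

class TwoBranchIncrementalNet(nn.Module):
    """Minimal sketch of parameter isolation for class-incremental learning:
    the branch trained on old classes is frozen, and a new trainable branch
    is appended (feature expansion) when a new phase arrives."""

    def __init__(self, old_branch: nn.Module, feat_dim: int, num_old_classes: int):
        super().__init__()
        self.old_branch = old_branch                  # backbone trained in earlier phases
        self.new_branch = None                        # added at the next incremental phase
        self.classifier = nn.Linear(feat_dim, num_old_classes)
        self.feat_dim = feat_dim

    def add_phase(self, new_branch: nn.Module, num_new_classes: int):
        # 1) Freeze old parameters so new-phase gradients cannot overwrite them.
        for p in self.old_branch.parameters():
            p.requires_grad = False
        # 2) Expand the model with a trainable branch and a wider classifier.
        self.new_branch = new_branch
        old_out = self.classifier.out_features
        self.classifier = nn.Linear(2 * self.feat_dim, old_out + num_new_classes)

    def forward(self, x):
        feats = [self.old_branch(x)]
        if self.new_branch is not None:
            feats.append(self.new_branch(x))
        return self.classifier(torch.cat(feats, dim=1))
```

In a new phase, only the unfrozen parameters would be handed to the optimizer, e.g. `torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.1)`, so the old-class knowledge stored in the frozen branch is left untouched.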
“…A straightforward way to retain old-class knowledge is to keep a few old-class exemplars in memory and use them to re-train the model in subsequent phases. The number of exemplars is usually limited, e.g., 5 ∼ 20 exemplars per class [13,18,26,28,37,44,46,48,50], as the total memory in CIL is strictly budgeted, e.g., to 2k exemplars.…”
Section: Introduction (mentioning, confidence: 99%)
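The fixed exemplar budget described in the statement above (a total memory of roughly 2k exemplars shared across all seen classes) is commonly managed by shrinking the per-class allowance as new classes arrive. The sketch below uses random selection for brevity; herding-based selection is the more common choice in practice, and the function and variable names are illustrative assumptions.

```python
import random

def rebuild_exemplar_memory(images_by_class, total_budget=2000, seed=0):
    """Keep at most `total_budget` exemplars overall, split evenly across all
    classes seen so far (e.g., 2000 budget / 100 classes = 20 per class).

    images_by_class: dict mapping class id -> list of training samples.
    Returns a dict with the retained exemplars per class.
    """
    rng = random.Random(seed)
    per_class = max(1, total_budget // len(images_by_class))

    memory = {}
    for cls, samples in images_by_class.items():
        if len(samples) <= per_class:
            memory[cls] = list(samples)                    # keep everything if the class is small
        else:
            memory[cls] = rng.sample(samples, per_class)   # random subset (herding in practice)
    return memory
```

In rehearsal-based training, these retained exemplars are mixed with the new-class data in every subsequent phase so the model keeps seeing the old classes.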
“…To address the ODD benchmark, we revisit state-of-the-art networks such as the vision transformer (ViT) (Dosovitskiy et al, 2020) with either fine-tuning or prompt-tuning strategies. For the first CDD benchmark, which allows rehearsal on early encountered deeparts and LAION-5B conarts, we propose a sharing scheme to enable failed general rehearsal-based methods such as FOSTER (Wang et al, 2022a) to work again. For the second CDD benchmark, which merely allows rehearsal on LAION conarts, we suggest a similar sharing method to fix the malfunctioning state-of-the-art exemplar-free methods such as S-Prompts (Wang et al, 2022b).…”
Section: Introduction (mentioning, confidence: 99%)
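The prompt-tuning strategy mentioned in the last statement, i.e., training a small set of learnable tokens and a head while the pretrained ViT stays frozen, can be sketched as follows. The `patch_embed` and `encoder` interfaces are assumptions standing in for a typical ViT implementation; they are not the API used by the cited works.

```python
import torch
import torch.nn as nn

class PromptTunedViT(nn.Module):
    """Minimal sketch of visual prompt tuning: the pretrained patch embedding and
    transformer encoder are frozen; only the prompt tokens and the classification
    head receive gradients."""

    def __init__(self, patch_embed: nn.Module, encoder: nn.Module,
                 embed_dim: int, num_prompts: int, num_classes: int):
        super().__init__()
        self.patch_embed = patch_embed             # image -> (B, N, D) patch tokens (assumed)
        self.encoder = encoder                     # (B, L, D) -> (B, L, D) token features (assumed)
        for p in list(self.patch_embed.parameters()) + list(self.encoder.parameters()):
            p.requires_grad = False                # keep the pretrained ViT frozen
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, images):
        tokens = self.patch_embed(images)
        prompts = self.prompts.expand(tokens.size(0), -1, -1)
        feats = self.encoder(torch.cat([prompts, tokens], dim=1))
        return self.head(feats[:, :self.prompts.size(1)].mean(dim=1))  # pool prompt outputs
```

Because only the prompts and the head are trainable, the per-phase parameter cost is small, which is what makes this family of methods attractive for exemplar-free class-incremental settings.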