2022
DOI: 10.48550/arxiv.2204.03410

Incremental Prototype Tuning for Class Incremental Learning

Abstract: Class incremental learning has attracted much attention, but most existing works still continually fine-tune the entire representation model, inevitably causing severe catastrophic forgetting. Instead of struggling against such forgetting through replay or distillation, as most existing methods do, we take a novel pre-train-and-prompt-tuning paradigm that sequentially learns new visual concepts on top of a fixed, semantically rich pre-trained representation model. In detail, we incrementally prompt-tune …
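The frozen-backbone idea sketched in the abstract can be illustrated with a minimal PyTorch example, assuming a transformer backbone that consumes token embeddings. Everything here (the PromptTuner class, prompt_len, and the small TransformerEncoder standing in for the actual pre-trained model) is an illustrative assumption, not the paper's implementation; the point is only that the prepended prompt vectors are the sole trainable parameters, so the representation model itself is never updated.

```python
import torch
import torch.nn as nn

class PromptTuner(nn.Module):
    """Frozen pre-trained encoder plus a small set of learnable prompts (a sketch)."""

    def __init__(self, backbone: nn.Module, embed_dim: int, prompt_len: int = 5):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False               # representation model stays fixed
        # the only trainable parameters: task-specific prompt embeddings
        self.prompts = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, embed_dim), e.g. patch embeddings from a ViT
        prompts = self.prompts.unsqueeze(0).expand(tokens.size(0), -1, -1)
        x = torch.cat([prompts, tokens], dim=1)   # prepend prompts to the sequence
        return self.backbone(x).mean(dim=1)       # pooled feature for classification

# Usage: tune a fresh prompt set for each incremental task; the optimizer only
# sees the prompts, so knowledge stored in the backbone cannot be overwritten.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True),
    num_layers=2,
)
model = PromptTuner(encoder, embed_dim=128)
optimizer = torch.optim.Adam([model.prompts], lr=1e-3)
features = model(torch.randn(8, 16, 128))         # -> shape (8, 128)
```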

Cited by 1 publication (1 citation statement)
References 30 publications (52 reference statements)
“…There have also emerged some other transformer-based incremental learning methods, such as (71; 27; 37), and other incremental prompting methods, such as (42; 65). Methods without prompting demand much more tuning of the whole network than prompting methods like (76) and ours. Also, they keep applying the traditional dependent learning paradigm and so play the tug-of-war game.…”
Section: More Discussion on Related Work
confidence: 99%