Imitating the oracle: Towards calibrated model for class incremental learning (2023)
DOI: 10.1016/j.neunet.2023.04.010

Cited by 5 publications (8 citation statements)
References 40 publications
“…The training data distribution deviates significantly from the target test distribution with growing incremental tasks, which motivates us to explore a novel data distribution model in CIL from the discrepancy perspective. Moreover, inspired by mixup (Zhang et al. 2018, 2021), recent research communities have also applied this training strategy in CIL (Mi et al. 2020; Bang et al. 2021; Zhu et al. 2021; Zhou et al. 2022). We extend our modeling framework to mixup and find that the discrepancy remains as in Corollary 1 and Fig.…”
Section: Introduction
Confidence: 80%
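For context on the mixup training strategy referenced above, a minimal sketch of standard input/label mixing (in the sense of Zhang et al. 2018) is given below. It is illustrative only and does not reproduce the discrepancy modeling of the cited work; the function name and the alpha value are assumptions.

```python
import torch

def mixup_batch(x, y, num_classes, alpha=0.2):
    """Mix each sample with a randomly permuted partner (standard mixup sketch)."""
    # Mixing coefficient drawn from Beta(alpha, alpha), as in the original mixup paper.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    # Labels are mixed as soft targets with the same coefficient.
    y_onehot = torch.nn.functional.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```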
“…While this technique aids in the retention of old information, it can be further improved by leveraging the knowledge of the distribution of classes in the feature space. Accordingly, IL2A [56] proposed storing covariance matrices to retain class variations, but this approach can be memory intensive. SSRE [58] proposed a dynamic structure reorganization strategy to retain and transfer knowledge between tasks along with a prototype selection mechanism that utilizes an up-sampling technique of non-augmented class-mean prototypes.…”
Section: Related Work 2.1 Incremental Learning
Confidence: 99%
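To illustrate why storing per-class covariance matrices (as the statement attributes to IL2A) is memory intensive, here is a minimal sketch of computing class-mean prototypes and covariances from extracted features; the function name and array shapes are assumptions, not the cited method's implementation.

```python
import numpy as np

def class_statistics(features, labels):
    """Per-class mean prototypes and covariance matrices.

    features: (N, D) array of extracted features; labels: (N,) integer class ids.
    Keeping a full D x D covariance for every class is what drives memory cost.
    """
    stats = {}
    for c in np.unique(labels):
        feats_c = features[labels == c]
        stats[c] = {
            "mean": feats_c.mean(axis=0),            # class-mean prototype
            "cov": np.cov(feats_c, rowvar=False),    # D x D covariance matrix
        }
    return stats
```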
“…As F_θ^t gets updated continually, the actual feature distributions of old classes drift away from their original distributions. To mitigate this drift we incorporate a feature-level knowledge distillation (L_KD) [57,56] that attempts to align the feature spaces of the current and the previous models.…”
Section: Knowledge Distillation
Confidence: 99%
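A common way to realize such a feature-level distillation term is to penalize the distance between embeddings of the current model and a frozen copy of the previous-task model. The sketch below assumes both models expose a `features(x)` method and uses an MSE penalty; these are illustrative choices, not the cited paper's exact loss.

```python
import torch
import torch.nn.functional as F

def feature_kd_loss(current_model, previous_model, x):
    """Feature-level distillation: align current features with the frozen previous model."""
    with torch.no_grad():
        old_feats = previous_model.features(x)   # teacher features from the previous task
    new_feats = current_model.features(x)        # student features from the current model
    return F.mse_loss(new_feats, old_feats)
```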