2019
DOI: 10.48550/arxiv.1903.07864
Preprint

Class-incremental Learning via Deep Model Consolidation

Abstract: Deep neural networks (DNNs) often suffer from "catastrophic forgetting" during incremental learning (IL): an abrupt degradation of performance on the original set of classes when the training objective is adapted to a newly added set of classes. Existing IL approaches tend to produce a model that is biased towards either the old classes or the new classes, unless aided by exemplars of the old data. To address this issue, we propose a class-incremental learning paradigm called Deep Model Consolidation (DMC),…

Cited by 13 publications (23 citation statements) | References 50 publications

“…In [6] the current model distills knowledge from all previous model snapshots, of which a pruned version is saved. Deep Model Consolidation (DMC) [40] proposes to train a separate model for the new classes, and then combine the two models (for old and new data, respectively) via a double distillation objective. The two models are consolidated via publicly available unlabeled auxiliary data.…”
Section: Related Work (mentioning)
confidence: 99%
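
The statement above summarizes the mechanism of DMC: two frozen teachers (one trained on the old classes, one on the new classes) are consolidated into a single student by regressing the student's logits toward the teachers' logits on unlabeled auxiliary data. A minimal PyTorch-style sketch of such a double distillation objective follows; the function name, the per-teacher logit centering, and the mean-squared-error formulation are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def double_distillation_loss(student_logits, old_teacher_logits, new_teacher_logits):
    # student_logits:     (B, n_old + n_new) logits of the consolidated (student) model
    # old_teacher_logits: (B, n_old) logits of the frozen model trained on the old classes
    # new_teacher_logits: (B, n_new) logits of the frozen model trained on the new classes
    # Center each teacher's logits per sample (an assumed normalization, so that
    # neither teacher dominates the regression target through logit scale).
    old_t = old_teacher_logits - old_teacher_logits.mean(dim=1, keepdim=True)
    new_t = new_teacher_logits - new_teacher_logits.mean(dim=1, keepdim=True)
    # Concatenate the two centered outputs to cover the full (old + new) class set
    # and regress the student toward them with a mean-squared error.
    target = torch.cat([old_t, new_t], dim=1)
    return F.mse_loss(student_logits, target)

# Hypothetical usage on a batch x drawn from unlabeled auxiliary data:
#   with torch.no_grad():
#       old_logits = old_model(x)   # teacher for the old classes (frozen)
#       new_logits = new_model(x)   # teacher for the new classes (frozen)
#   loss = double_distillation_loss(student_model(x), old_logits, new_logits)
#   loss.backward()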
“…Comparison with existing methods: We extensively compare iTAML with several popular incremental learning methods, including Elastic Weight Consolidation [13], Riemannian Walk (RWalk) [3], Learning without Forgetting (LwF) [15], Synaptic Intelligence (SI) [31], Memory Aware Synapses (MAS) [1], Deep Model Consolidation (DMC) [32], Incremental Classifier and Representation Learning (iCARL) [23], Random Path Selection network (RPS-net) [21] and Bias Correction Method (BiC) [30]. We also compare against Fixed representations (FixedRep) and Fine tuning (FineTune).…”
Section: Results and Comparisons (mentioning)
confidence: 99%
“…Other models seek to leverage extra knowledge, including unlabeled data [28], [72] (which is independent of the targeted tasks) or biases (due to imbalanced distributions) between previous and current tasks, to further enhance generalization [15], [63].…”
Section: Related Work (mentioning)
confidence: 99%