2021
DOI: 10.3390/s21082792

Memory-Replay Knowledge Distillation

Abstract: Knowledge Distillation (KD), which transfers knowledge from a teacher to a student network by penalizing their Kullback–Leibler (KL) divergence, is a widely used tool for Deep Neural Network (DNN) compression in intelligent sensor systems. Traditional KD uses a pre-trained teacher, while self-KD distills a network's own knowledge to achieve better performance. The role of the teacher in self-KD is usually played by multi-branch peers or by identical samples under different augmentations. However, the mentioned self-KD…
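For orientation, the conventional KD objective referenced in the abstract combines a temperature-scaled KL divergence between the softened teacher and student outputs with an ordinary cross-entropy term on the hard labels. The sketch below illustrates that generic loss only; the function name, temperature, and weighting are illustrative assumptions, not the settings used in this paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.9):
    """Generic KD objective (sketch): KL(teacher || student) on softened
    distributions plus cross-entropy on ground-truth labels.
    Hyperparameters here are illustrative, not the paper's."""
    # Soften both distributions with the temperature T.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL divergence, scaled by T^2 to keep gradient magnitudes comparable.
    kl = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Standard cross-entropy on the hard labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kl + (1.0 - alpha) * ce
```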

Cited by 8 publications (16 citation statements)
References 25 publications
“…The replay-based methods focusing on memory optimization [28,45] are closely related to our work. [28] proposed a bilevel optimization framework to distill the current new class data into exemplars before discarding them.…”
Section: Related Work (mentioning)
confidence: 98%
“…Replay-based methods assume there is a clear memory budget allowing a handful of old-class exemplars in the memory. Exemplars can be used to re-train the model in each new phase [13,18,27,28,37,45,46]. This re-training usually contains two steps: one step of training the model on all new class data and old class exemplars, and one step of finetuning the model with a balanced subset (i.e., using an equal number of samples per class) [13,18,26,27,48].…”
Section: Related Work (mentioning)
confidence: 99%
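The citation statement above describes the typical two-step re-training used by replay-based methods: first train on all new-class data together with the stored old-class exemplars, then finetune on a class-balanced subset. A minimal sketch of that generic procedure is given below; the function, dataset format ((image, label) pairs), and hyperparameters are assumptions for illustration, not the protocol of any specific cited paper.

```python
import random
from collections import defaultdict

from torch.utils.data import ConcatDataset, Subset, DataLoader

def train_one_phase(model, new_data, exemplars, optimizer, loss_fn,
                    epochs=10, finetune_epochs=2, per_class=20, device="cpu"):
    """Sketch of a replay-based incremental phase:
    (1) train on new-class data plus old-class exemplars,
    (2) finetune on a balanced subset (equal samples per class)."""
    combined = ConcatDataset([new_data, exemplars])

    # Step 1: train on all new-class data and the old-class exemplars.
    loader = DataLoader(combined, batch_size=128, shuffle=True)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

    # Step 2: build a class-balanced subset and finetune on it.
    by_class = defaultdict(list)
    for idx in range(len(combined)):          # read labels by indexing (sketch only)
        _, y = combined[idx]
        by_class[int(y)].append(idx)
    balanced_idx = [i for idxs in by_class.values()
                    for i in random.sample(idxs, min(per_class, len(idxs)))]
    balanced_loader = DataLoader(Subset(combined, balanced_idx),
                                 batch_size=128, shuffle=True)
    for _ in range(finetune_epochs):
        for x, y in balanced_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
```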