2020
DOI: 10.48550/arxiv.2004.00176
Preprint
Knowledge as Priors: Cross-Modal Knowledge Generalization for Datasets without Superior Knowledge

Abstract: Cross-modal knowledge distillation deals with transferring knowledge from a model trained with superior modalities (Teacher) to another model trained with weak modalities (Student). Existing approaches require that paired training examples exist in both modalities. However, accessing the data from superior modalities may not always be feasible. For example, in the case of 3D hand pose estimation, depth maps, point clouds, or stereo images usually capture better hand structures than RGB images, but most of them are …
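For context, the paired-data setup that the abstract contrasts against can be summarized in a single training step: the teacher consumes the superior modality (e.g., depth), the student consumes RGB, and the student is regularized toward the teacher's softened outputs. The sketch below is illustrative only and is not the paper's proposed method; it is shown as classification for simplicity, and `teacher`, `student`, the dataloader fields, and all hyperparameters are assumed placeholders.

```python
# Minimal sketch (NOT the paper's method): standard cross-modal knowledge
# distillation assuming paired (rgb, depth) training examples are available.
import torch
import torch.nn.functional as F

def cross_modal_distillation_step(student, teacher, rgb, depth, target,
                                  optimizer, T=4.0, alpha=0.5):
    """One step: the student sees RGB only, the teacher sees the superior modality."""
    with torch.no_grad():
        teacher_logits = teacher(depth)   # teacher trained on the superior modality
    student_logits = student(rgb)         # student restricted to RGB input

    # Soft-target distillation loss: KL between temperature-softened distributions.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    # Supervised task loss on the ground-truth labels.
    task_loss = F.cross_entropy(student_logits, target)

    loss = alpha * kd_loss + (1 - alpha) * task_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```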

Cited by 2 publications (1 citation statement) · References 47 publications
“…In LwF [13], a knowledge distillation regularization term is first introduced into the loss function to retain the knowledge learnt from the previous training data. Knowledge distillation refers to distilling knowledge from a cumbersome teacher model and infusing it into a lightweight student model; it has been applied extensively as a teaching mechanism [23][24][25][26] and can improve model generalization [27][28][29]. As shown in [9], LwF tends to favor new classes in the inference phase.…”
Section: Class Incremental Learning
confidence: 99%
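To make the LwF-style regularization term mentioned in the citation statement concrete, here is a minimal sketch assuming PyTorch; `old_model`, `new_model`, the old/new class split, and the weighting `lam` are hypothetical names, not the cited papers' exact formulation. The frozen old model's outputs on the new data act as soft targets that discourage the updated model from forgetting previously learned classes.

```python
# Minimal sketch of an LwF-style distillation regularizer for class-incremental
# learning: new-task cross-entropy plus distillation of the frozen old model's
# outputs, computed on the new training data.
import torch
import torch.nn.functional as F

def lwf_loss(new_model, old_model, x, y_new, n_old_classes, T=2.0, lam=1.0):
    logits = new_model(x)                   # head covers old + new classes
    with torch.no_grad():
        old_logits = old_model(x)           # frozen model, old classes only

    # Distillation regularizer on the old-class outputs (retains prior knowledge).
    distill = F.kl_div(
        F.log_softmax(logits[:, :n_old_classes] / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    # Standard cross-entropy on the new-task labels.
    ce = F.cross_entropy(logits, y_new)
    return ce + lam * distill
```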