2022
DOI: 10.48550/arxiv.2207.08356
Preprint

Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution

Abstract: Although the deep CNN-based super-resolution methods have achieved outstanding performance, their memory cost and computational complexity severely limit their practical employment. Knowledge distillation (KD), which can efficiently transfer knowledge from a cumbersome network (teacher) to a compact network (student), has demonstrated its advantages in some computer vision applications. The representation of knowledge is vital for knowledge transferring and student learning, which is generally defined in hand-…
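The abstract describes the general teacher–student setup: a compact student is trained both on the ground-truth target and on signals extracted from a larger teacher. The paper's own meta-knowledge formulation is not reproduced here; the following is only a minimal numpy sketch of a generic distillation-style objective for super-resolution, where a reconstruction term is combined with a feature-matching term. All names (`distillation_loss`, `alpha`) are illustrative, not the paper's notation.

```python
import numpy as np

def distillation_loss(student_feat, teacher_feat, student_out, ground_truth, alpha=0.5):
    """Sketch of a generic distillation objective for super-resolution.

    Combines an L1 reconstruction loss against the ground-truth HR image
    with an L2 penalty pulling the student's intermediate features toward
    the teacher's. `alpha` weights the distillation term. This is an
    assumed, simplified form, not the paper's actual loss.
    """
    recon = np.mean(np.abs(student_out - ground_truth))    # L1 reconstruction term
    distill = np.mean((student_feat - teacher_feat) ** 2)  # feature-matching term
    return recon + alpha * distill
```

In practice both terms would be computed on minibatches inside a training loop, with the teacher's features detached from the gradient graph so that only the student is updated.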

Cited by 0 publications
References 49 publications