2020
DOI: 10.1609/aaai.v34i07.6715

Look One and More: Distilling Hybrid Order Relational Knowledge for Cross-Resolution Image Recognition

Abstract: In spite of the great success achieved by recent deep models in many image recognition tasks, directly applying them to recognize low-resolution images may suffer from low accuracy due to the loss of informative details during resolution degradation. However, these images are still recognizable to subjects who are familiar with the corresponding high-resolution ones. Inspired by that, we propose a teacher-student learning approach to facilitate low-resolution image recognition via hybrid order relational knowl…

Cited by 23 publications (7 citation statements)
References 19 publications
“…The dataset used is down-sampled to a resolution of 16×16, and the classification loss is the classic cross-entropy loss. The experimental results are shown in Fig. 6: our approach shows good adaptability on the more challenging low-resolution image classification task, scoring 1.31% higher than the current best method, HORKD [64]. In addition, HORKD consumes more computing resources due to the introduction of the assistant network.…”
Section: F. Adaptability Analysis (mentioning)
confidence: 92%
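The setup quoted above (16×16 inputs, plain cross-entropy) is straightforward to reproduce. Below is a minimal PyTorch-style sketch, assuming images arrive as a batched tensor and that `model` is any classifier accepting 16×16 inputs; both names are illustrative placeholders, not the cited authors' code.

```python
# Minimal sketch of the low-resolution classification setup described above.
# Assumptions: batched float image tensors (B, C, H, W) and an arbitrary classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_low_resolution(images: torch.Tensor, size: int = 16) -> torch.Tensor:
    """Down-sample a batch of images to size x size (e.g. 16x16)."""
    return F.interpolate(images, size=(size, size), mode="bilinear", align_corners=False)

def classification_step(model: nn.Module, images: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """One training step with the classic cross-entropy classification loss."""
    lr_images = make_low_resolution(images, size=16)
    logits = model(lr_images)
    return F.cross_entropy(logits, labels)
```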
“…Moreover, their models have more parameters, which leads to a significant increase in the computing cost of inference. It is worth mentioning that VLRR [65], SKD [62], HORKD [64] and LRFRW [71] all require, during the distillation process, high-resolution images corresponding to the low-resolution faces in order to provide more information; however, such high-resolution images are not always easy to obtain. Our approach uses only low-resolution face images for training, which matches real-world application scenarios.…”
Section: F. Adaptability Analysis (mentioning)
confidence: 99%
“…[18] proposed that direct transfer from private HR to wild LR may be difficult, and used public HR and LR images as a bridge to distill and compress knowledge. In order to fully transfer HR recognition knowledge to the LR network, [20] not only focuses on first-order knowledge between individual points, but also considers high-order distillation: knowledge of relations of various orders is extracted from the teacher network and used as a supervision signal for the student network.…”
Section: LR Face Recognition (mentioning)
confidence: 99%
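The hybrid-order idea summarized above combines point-wise (first-order) matching with higher-order relational supervision from the teacher. The sketch below only illustrates that general structure; the specific loss forms, relation orders, and weights (`alpha`, `beta`) are assumptions for illustration and are not the exact HORKD formulation.

```python
# Sketch of combining first-order (point-wise) and higher-order (pairwise,
# relational) distillation terms. Assumes student and teacher features already
# share the same dimensionality (e.g. via a projection layer).
import torch
import torch.nn.functional as F

def first_order_loss(f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
    """Point-wise (first-order) matching between student and teacher features."""
    return F.smooth_l1_loss(f_s, f_t)

def pairwise_relation(feats: torch.Tensor) -> torch.Tensor:
    """Higher-order knowledge: normalized pairwise distances within a batch."""
    dists = torch.cdist(feats, feats, p=2)            # (B, B) distance matrix
    mean = dists[dists > 0].mean().clamp_min(1e-8)    # normalize by mean distance
    return dists / mean

def hybrid_order_loss(f_s: torch.Tensor, f_t: torch.Tensor,
                      alpha: float = 1.0, beta: float = 1.0) -> torch.Tensor:
    """Supervise the student with both orders of knowledge from the teacher."""
    l1 = first_order_loss(f_s, f_t.detach())
    l2 = F.smooth_l1_loss(pairwise_relation(f_s), pairwise_relation(f_t.detach()))
    return alpha * l1 + beta * l2
```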
“…There are also works that use knowledge distillation [16,17,18,19,20] to make an LR network mimic an HR network trained on a rich HR training set. With paired LR and HR images of the same identity, the LR network is supervised by both the class label and the soft target of the HR network, corresponding to a classification loss and a distillation loss, respectively.…”
Section: Introduction (mentioning)
confidence: 99%
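The supervision scheme described in this statement is the classic knowledge-distillation recipe: a hard-label classification loss plus a soft-target distillation loss from the HR teacher. A minimal sketch follows, assuming paired LR/HR tensors of the same identities; the temperature `T` and weight `lam` are illustrative choices, not values from the cited works.

```python
# Sketch of supervising an LR student with both the class label and the
# HR teacher's soft targets (classification loss + distillation loss).
import torch
import torch.nn as nn
import torch.nn.functional as F

def kd_losses(student: nn.Module, teacher: nn.Module,
              lr_images: torch.Tensor, hr_images: torch.Tensor,
              labels: torch.Tensor, T: float = 4.0, lam: float = 0.5):
    """Return (classification_loss, distillation_loss) for paired LR/HR images."""
    s_logits = student(lr_images)                  # student sees the LR image
    with torch.no_grad():
        t_logits = teacher(hr_images)              # frozen teacher sees the paired HR image
    cls_loss = F.cross_entropy(s_logits, labels)   # hard-label classification loss
    kd_loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                       F.softmax(t_logits / T, dim=1),
                       reduction="batchmean") * (T * T)
    return cls_loss, lam * kd_loss
```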