2022
DOI: 10.1007/978-3-031-19775-8_37
Teaching Where to Look: Attention Similarity Knowledge Distillation for Low Resolution Face Recognition

Cited by 23 publications (8 citation statements) · References 29 publications
“…Convolutional neural networks (CNNs) have been widely applied to a variety of classification tasks, including 1D signal analysis [11, 49, 50] and 2D image analysis [51, 52, 53]. CNNs have the advantage of extracting spatial information from the input by sliding fixed-size kernels with trainable weights.…”
Section: Methods
Mentioning, confidence: 99%
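The statement above describes the core convolution operation. As a minimal sketch in PyTorch (the framework choice, input size, and channel counts are illustrative assumptions, not taken from the cited work), a convolutional layer slides fixed-size trainable kernels over the input to produce a spatial feature map:

```python
import torch
import torch.nn as nn

# A 3x3 convolution with trainable weights, slid over the input with padding
# so the spatial size is preserved.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

x = torch.randn(1, 3, 112, 112)   # one 112x112 RGB face crop (illustrative)
features = conv(x)                # -> (1, 16, 112, 112) spatial feature map
print(features.shape)
```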
“…They also selected the feature-map position for knowledge distillation before the ReLU, retaining both positive and negative values and allowing the student to learn more information. Building on previous work, Shin et al. [36] employed attention similarity to address the recognition problem in low-resolution (LR) facial images. They used attention map spectra for knowledge extraction and transfer, achieving high efficiency and simplicity.…”
Section: Transfer Learning
Mentioning, confidence: 99%
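A hedged sketch of the attention-similarity idea attributed to [36]: the student's attention map is pulled toward the teacher's by minimizing a negative cosine-similarity term. The function name, map shapes, and the exact similarity measure are assumptions for illustration, not the paper's verbatim implementation:

```python
import torch
import torch.nn.functional as F

def attention_similarity_loss(attn_teacher: torch.Tensor,
                              attn_student: torch.Tensor) -> torch.Tensor:
    """attn_*: (B, 1, H, W) spatial attention maps from teacher and student."""
    t = attn_teacher.flatten(1)   # (B, H*W)
    s = attn_student.flatten(1)
    # 1 - cosine similarity per sample, averaged over the batch
    return (1.0 - F.cosine_similarity(s, t, dim=1)).mean()
```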
“…However, these datasets primarily consist of high-resolution (HR) images and do not provide dedicated low-resolution (LR) face images. We therefore adopt the LR image generation protocol proposed in [36, 42]: the HR face images are down-sampled by factors of 2×, 4×, and 8× using bilinear interpolation, followed by Gaussian blur, to construct the LR images.…”
Section: Datasets
Mentioning, confidence: 99%
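The protocol above is simple enough to sketch directly. In this hedged PyTorch/torchvision version, the blur kernel size and sigma are illustrative assumptions (the quoted protocol does not fix them here):

```python
import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

def make_lr(hr: torch.Tensor, factor: int = 4) -> torch.Tensor:
    """hr: (B, C, H, W) HR face images; factor: 2, 4, or 8."""
    h, w = hr.shape[-2:]
    # Bilinear down-sampling by the chosen factor...
    lr = F.interpolate(hr, size=(h // factor, w // factor),
                       mode="bilinear", align_corners=False)
    # ...followed by Gaussian blur (kernel size / sigma assumed).
    return GaussianBlur(kernel_size=3, sigma=1.0)(lr)
```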
“…This idea was first proposed in [11] to transfer knowledge from a high-performing but computationally expensive teacher network into a simple student network. Recent studies [12, 13, 14, 15] have shown the potential of this approach for recognition problems in low-resolution domains. For instance, Zhu et al. [12] addressed the low-resolution object recognition problem with the teacher-student learning paradigm.…”
Section: Introduction
Mentioning, confidence: 99%
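For reference, the classic distillation objective from [11] trains the student to match the teacher's temperature-softened logits alongside the usual hard-label loss. This is a generic sketch of that loss; the temperature and weighting values are conventional defaults, not values from the works cited above:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # KL divergence between temperature-softened teacher and student
    # distributions, scaled by T^2 as in Hinton et al. [11].
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```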
“…The authors of [13, 14] developed efficient low-resolution face recognition models with very low computational cost by distilling the most informative facial features from the teacher to the student stream. More recently, [15] performed attention similarity knowledge distillation: instead of feature maps, they transferred attention maps obtained from the teacher network to the student network to boost LR face recognition performance.…”
Section: Introduction
Mentioning, confidence: 99%
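One common way to obtain the attention maps that get transferred (as opposed to raw feature maps) is to pool a feature map across its channel dimension; whether [15] uses exactly this reduction or a learned attention module is not stated here, so treat this as an assumed illustration:

```python
import torch
import torch.nn.functional as F

def spatial_attention(fmap: torch.Tensor) -> torch.Tensor:
    """Reduce a (B, C, H, W) feature map to a normalized spatial attention map."""
    attn = fmap.pow(2).mean(dim=1).flatten(1)   # channel-wise pooling -> (B, H*W)
    return F.normalize(attn, dim=1)             # L2-normalize per sample
```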