2020 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip40778.2020.9190651
Triplet Distillation For Deep Face Recognition

Cited by 34 publications (16 citation statements) · References 15 publications
“…In general, the main usage of knowledge distillation is to transfer the knowledge from several networks to another one (Hinton, Dean, and Vinyals 2014; Feng et al. 2020; David et al. 2016). Our work is closely related to the following very recent knowledge distillation work for recognition.…”
Section: Knowledge Distillation
confidence: 99%
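To make the transfer mechanism referenced above concrete, here is a minimal sketch of Hinton-style knowledge distillation with soft targets, assuming PyTorch; the temperature and weighting values are illustrative choices, not taken from the cited works.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soften both output distributions with temperature T and match them
    # with KL divergence, as in Hinton, Dean, and Vinyals (2014).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes are comparable across T
    # Standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```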
“…Previous works [14], [21] indicated that utilizing triplet-based learning is beneficial for learning discriminative face embeddings. Let x ∈ X represent a batch of training samples, and let f(x) denote the face embeddings obtained from the face recognition model.…”
Section: Self-Restrained Triplet Loss
confidence: 99%
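The triplet-based learning this statement refers to is the standard triplet loss over embeddings f(x). A minimal sketch follows, assuming PyTorch and L2-normalized embeddings; the margin value is an illustrative placeholder.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # anchor/positive share an identity; negative is a different identity.
    d_ap = (anchor - positive).pow(2).sum(dim=1)  # anchor-positive distance
    d_an = (anchor - negative).pow(2).sum(dim=1)  # anchor-negative distance
    # Penalize triplets where the negative is not at least `margin`
    # farther from the anchor than the positive is.
    return F.relu(d_ap - d_an + margin).mean()
```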
“…Several effective face recognition models are trained using methods involving knowledge distillation [34]. In [35], an enhanced version of the triplet loss is proposed, named triplet distillation, which exploits the capability of a teacher model to transfer similarity information to a small model by adaptively varying the margin between positive and negative pairs. In [36], the authors present a novel model compression approach based on the student-teacher paradigm for face recognition applications, in which the teacher network is trained at a higher image resolution while the student networks are trained at lower image resolutions.…”
Section: Related Work
confidence: 99%
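The adaptive margin described in [35] can be sketched as follows: the teacher's embedding distances set a per-triplet dynamic margin for the student. This is a hedged illustration assuming PyTorch; the m_min/m_max bounds and the linear mapping rule are assumptions for exposition, not the exact formulation of the paper.

```python
import torch
import torch.nn.functional as F

def dynamic_margin(t_anchor, t_pos, t_neg, m_min=0.2, m_max=0.6):
    # Teacher-side distances: how dissimilar the teacher considers each pair.
    d_ap = (t_anchor - t_pos).pow(2).sum(dim=1)
    d_an = (t_anchor - t_neg).pow(2).sum(dim=1)
    # Map the teacher's distance gap into [m_min, m_max]: hard triplets
    # (small gap) receive a small margin, easy triplets a large one.
    gap = (d_an - d_ap).clamp(min=0.0)
    return m_min + (m_max - m_min) * gap / (gap.max() + 1e-12)

def triplet_distillation_loss(s_anchor, s_pos, s_neg, t_anchor, t_pos, t_neg):
    # Margin comes from the (frozen) teacher; detach so no gradient flows to it.
    margin = dynamic_margin(t_anchor, t_pos, t_neg).detach()
    d_ap = (s_anchor - s_pos).pow(2).sum(dim=1)
    d_an = (s_anchor - s_neg).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + margin).mean()
```

The design intuition is that a fixed margin treats all triplets alike, whereas the teacher's similarity gap indicates how separable each triplet actually is, letting the student spend capacity where the teacher sees clear structure.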