2020
DOI: 10.48550/arxiv.2006.16331
Preprint

Asymmetric metric learning for knowledge transfer

Cited by 8 publications (25 citation statements) · References 0 publications
“…The reason for not seeing an improvement with the positive sampler might be that, in the task of compatible training, the most significant objective is to train the new model to map images to the same features as the old model, rather than to constrain the distance between positive pairs. Such a phenomenon has also been observed in related work (Budnik & Avrithis, 2020). As shown in Table 3 of (Budnik & Avrithis, 2020), the "regression" method (which forces the old and new features of the same image to be similar using cosine similarity) wins the asymmetric test (cross-model test) compared to other methods that regularize positive pairs.…”
Section: A4 Ablation Studies (supporting)
confidence: 60%
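To make the quoted "regression" objective concrete: below is a minimal sketch, assuming PyTorch, of a cosine-similarity regression loss that pulls each new-model feature toward the frozen old-model feature of the same image. The function and tensor names are illustrative assumptions, not the cited papers' actual code.

```python
import torch
import torch.nn.functional as F

def regression_loss(new_feats: torch.Tensor, old_feats: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity regression: push new-model features toward the
    old-model features of the same images (a sketch of the "regression"
    method discussed above, not the authors' exact implementation)."""
    new_feats = F.normalize(new_feats, dim=1)
    old_feats = F.normalize(old_feats, dim=1)
    # The old model is frozen during compatible training, so its features
    # are detached; minimizing (1 - cos) maximizes cosine similarity.
    return (1.0 - (new_feats * old_feats.detach()).sum(dim=1)).mean()
```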
“…As shown in Table 6 of the Appendix, similar results can be observed with or without a pre-sampler. The reason for the similar results might be that the most significant objective of compatible training is to train the new model to map images to the same features as the old model rather than to constrain the distance between positive pairs, according to (Budnik & Avrithis, 2020). Note that negative pairs are built from the new and old features of inter-class instances in the mini-batch, i.e., intra-class samples are not treated as negatives in our instance discrimination-like compatible training.…”
Section: Revisit Conventional Compatible Training (mentioning)
confidence: 96%
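The negative-pair construction described in this quote can be sketched as follows, assuming a PyTorch mini-batch with class labels: the mask keeps only inter-class (new, old) feature pairs as negatives, so intra-class samples are never counted as negatives. Names and shapes are illustrative assumptions.

```python
import torch

def inter_class_negative_mask(labels: torch.Tensor) -> torch.Tensor:
    """Boolean (B, B) mask over (new_i, old_j) pairs in a mini-batch:
    True only where the two samples carry different class labels, so
    only inter-class pairs act as negatives (a sketch of the quoted
    construction, not the papers' exact code)."""
    labels = labels.view(-1, 1)
    return labels != labels.t()

# Example: with labels [0, 0, 1], same-class pairs are excluded:
# tensor([[False, False,  True],
#         [False, False,  True],
#         [ True,  True, False]])
mask = inter_class_negative_mask(torch.tensor([0, 0, 1]))
```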