2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00330

Proxy Anchor Loss for Deep Metric Learning

Abstract: Existing metric learning losses can be categorized into two classes: pair-based and proxy-based losses. The former class can leverage fine-grained semantic relations between data points, but in general converges slowly due to its high training complexity. In contrast, the latter class enables fast and reliable convergence, but cannot consider the rich data-to-data relations. This paper presents a new proxy-based loss that takes advantage of both pair- and proxy-based methods and overcomes their limitations. Th…
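
To make the idea concrete, below is a minimal PyTorch sketch of the Proxy Anchor loss as formulated in the paper: each class proxy serves as an anchor, and positive/negative embeddings are hardness-weighted inside a log-sum-exp. The function name and batching conventions are our own; δ = 0.1 and α = 32 follow the paper's suggested defaults.

```python
import torch
import torch.nn.functional as F

def proxy_anchor_loss(embeddings, labels, proxies, margin=0.1, alpha=32.0):
    """Proxy Anchor loss sketch: every proxy acts as an anchor, pulled
    toward same-class embeddings and pushed away from the rest, with
    harder examples weighted more strongly via the log-sum-exp."""
    # Cosine similarity between each embedding and each class proxy: (B, C).
    sim = F.linear(F.normalize(embeddings), F.normalize(proxies))

    pos_mask = F.one_hot(labels, num_classes=proxies.size(0)).float()  # (B, C)
    neg_mask = 1.0 - pos_mask

    # Hardness-weighted terms, masked so each sum runs over positives
    # (same-class pairs) or negatives only.
    pos_exp = torch.exp(-alpha * (sim - margin)) * pos_mask
    neg_exp = torch.exp(alpha * (sim + margin)) * neg_mask

    has_pos = pos_mask.sum(dim=0) > 0  # proxies with a positive in this batch
    pull = torch.log1p(pos_exp.sum(dim=0))[has_pos].sum() / has_pos.sum().clamp(min=1)
    push = torch.log1p(neg_exp.sum(dim=0)).sum() / proxies.size(0)
    return pull + push
```

In practice the proxies would be learnable parameters (e.g., a `torch.nn.Parameter` of shape (C, d)); the paper notes that training them with a higher learning rate than the backbone speeds convergence.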

Cited by 309 publications (314 citation statements) | References 34 publications
“…We compared many state-of-the-art alternatives, including four ensemble methods, i.e., ABE (Attention-Based Ensemble) [28], DREML (Deep Randomized Ensemble for Metric Learning) [27], DCES (Divide-and-Conquer Embedding Space) [29], and A-BIER (Adversarial-Boosting Independent Embeddings Robustly) [11], as well as typical methods, e.g., Triplet [13], HTL (Hierarchical Triplet Loss) [15], and Margin [20]. Besides, proxy-based approaches like Proxy-NCA [18] and Proxy-Anchor [19] are compared, along with MS (Multi-Similarity loss) [17] and its variants DR-MS (Direction Regularized MS) [33] and EE-MS (Embedding Expansion MS) [21]. In addition, the records of ALA (Adaptive Learnable Assessment) [22] and HDML (Hardness-aware Deep Metric Learning) [23] are reported.…”
Section: Results and Analysis (mentioning)
confidence: 99%
“…Hence, we obtain the consensus-aware adaptive ensemble, which is employed to yield the embedding representations of test samples across a wide range of classes. Following [19,36,37], we use Recall@h (R@h) and mAP@R for the image retrieval task and Normalized Mutual Information (NMI) [29] for the clustering task. R@h is the ratio of queries for which at least one sample of the query's class appears among the top h items retrieved by the model, and mAP@R is a stricter criterion that also accounts for the ranking order of the retrieved items.…”
Section: Objective Function (mentioning)
confidence: 99%
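
As a concrete illustration of the R@h metric described in the statement above, here is a small NumPy sketch. The array names and shapes are our own assumptions, and the retrieval step itself (e.g., nearest-neighbor search over embeddings) is assumed to have been done beforehand.

```python
import numpy as np

def recall_at_h(query_labels, retrieved_labels, h):
    """R@h: fraction of queries whose top-h retrieved items contain at
    least one item of the query's own class.

    query_labels:     (Q,)   class label of each query
    retrieved_labels: (Q, K) labels of ranked retrieved items, K >= h
    """
    hits = (retrieved_labels[:, :h] == query_labels[:, None]).any(axis=1)
    return hits.mean()
```
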
“…We then generate and maintain a set of $L_2$-normalized proxies $\{p_c\}_{c=0}^{C}$, $p_c \in \mathbb{R}^d$, where $d$ is the dimension of the embeddings $z$, and we want data with confident pseudolabels to be drawn close to proxies of the same class. We feed the pseudolabels and confidence values we obtained to a loss function adapted from [14]. Let…”
Section: Proposed Methods (mentioning)
confidence: 99%
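
The statement above does not spell out how the confidence values enter the loss adapted from [14] (Proxy-Anchor). One plausible reading, sketched below purely as an illustration, is to keep the proxies on the unit sphere and weight each confident sample's pull toward its pseudolabel's proxy by its confidence. The `ProxyBank` name, the threshold, and the weighting scheme are hypothetical, not the cited work's actual adaptation.

```python
import torch
import torch.nn.functional as F

class ProxyBank(torch.nn.Module):
    """Hypothetical sketch: C learnable proxies in R^d, used L2-normalized,
    with pseudolabel-confidence-weighted attraction."""
    def __init__(self, num_classes, dim):
        super().__init__()
        self.proxies = torch.nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, z, pseudolabels, confidence, threshold=0.9):
        p = F.normalize(self.proxies)   # keep proxies on the unit sphere
        z = F.normalize(z)              # L2-normalize embeddings
        keep = confidence > threshold   # use only confident pseudolabels
        if not keep.any():
            return z.new_zeros(())
        sim = z[keep] @ p.t()                                   # (B', C)
        pos = sim[torch.arange(sim.size(0)), pseudolabels[keep]]
        # Confidence-weighted pull: high-confidence samples are drawn
        # more strongly toward their pseudolabel's proxy.
        return (confidence[keep] * (1.0 - pos)).mean()
```
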
“…In this paper, we compare our results against state-of-the-art semi-supervised algorithms SDEC [9] and PULLMH [5]; fully-supervised algorithms Proxy-Anchor [14], SoftTriple [19], and ProxyNCA++ [20]; and the unsupervised algorithm DEC [21].…”
Section: Datasets (mentioning)
confidence: 99%