2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00655

SoftTriple Loss: Deep Metric Learning Without Triplet Sampling

Abstract: Distance metric learning (DML) learns embeddings in which examples from the same class are closer than examples from different classes. It can be cast as an optimization problem with triplet constraints. Due to the vast number of triplet constraints, a sampling strategy is essential for DML. With the tremendous success of deep learning in classification, it has been applied to DML. When learning embeddings with deep neural networks (DNNs), only a mini-batch of data is available at each iteration. The s…
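For context, a triplet constraint requires an anchor to be closer to a same-class (positive) example than to a different-class (negative) example; SoftTriple sidesteps sampling such triplets by attaching multiple learnable centers to each class and optimizing a smoothed, margin-augmented classification loss over them. Below is a minimal numpy sketch of that loss under stated assumptions: the function and parameter names (softtriple_loss, lam, gamma, delta) are illustrative rather than the authors' reference implementation, embeddings and centers are assumed L2-normalized, and the paper's regularizer that merges redundant centers is omitted.

```python
import numpy as np

def softtriple_loss(x, y, W, lam=20.0, gamma=0.1, delta=0.01):
    """Illustrative SoftTriple-style loss (not the authors' code).

    x : (N, d) L2-normalized embeddings
    y : (N,)   integer class labels
    W : (C, K, d) L2-normalized centers, K per class
    """
    # Similarity of every example to every center: (N, C, K)
    sim = np.einsum('nd,ckd->nck', x, W)
    # Soft assignment over each class's K centers (temperature gamma)
    p = np.exp(sim / gamma)
    p /= p.sum(axis=2, keepdims=True)
    # Relaxed example-to-class similarity S[i, c]: (N, C)
    S = (p * sim).sum(axis=2)
    # Margin delta is applied to the target class only
    S_adj = S.copy()
    S_adj[np.arange(len(y)), y] -= delta
    # Scaled softmax cross-entropy over classes (numerically stabilized)
    logits = lam * S_adj
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(y)), y].mean()

# Toy usage: 8 examples, 4 classes, 3 centers per class, 16-dim embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16)); x /= np.linalg.norm(x, axis=1, keepdims=True)
W = rng.normal(size=(4, 3, 16)); W /= np.linalg.norm(W, axis=2, keepdims=True)
y = rng.integers(0, 4, size=8)
print(softtriple_loss(x, y, W))
```

Because all class centers live inside the loss itself, each mini-batch is compared against every class at once, which is what removes the need for a separate triplet-sampling stage.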

Cited by 334 publications (259 citation statements: 3 supporting, 256 mentioning, 0 contrasting). References 22 publications.

“…[3], and DREML [5]) with a large margin. Impressively, even the single-head version of HDhE (Sh-HDhE), which keeps only the pruned head (last layer Res4_1), performs competitively with the latest algorithms (SCHM [8], SoftTriple [9], and Proxy-Anchor [10]). Furthermore, the proposed HDhE improves Recall@1 on CUB-200 by over 4% relative to Sh-HDhE.…”
Section: E. Comparison to State-of-the-Art Methods
confidence: 99%
“…This idea is an interesting extension of instance-to-class similarity relationship modelling. In this approach, classes are represented by proxies [34], [39]. It is appealing in that it affords more flexibility: (1) the number of proxies can be smaller than the number of training classes, in which case multiple classes are assigned to the same proxy.…”
Section: Design of Loss Functions for Learning Discriminative Deep Representations
confidence: 99%
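The proxy mechanism in the excerpt above can be made concrete with a minimal Proxy-NCA-style sketch, shown below under stated assumptions: one proxy per class (the simplest assignment; the excerpt notes that fewer proxies than classes are also possible), L2-normalized inputs, and illustrative names (proxy_nca_loss, proxies) rather than any particular library's API.

```python
import numpy as np

def proxy_nca_loss(x, y, proxies):
    """Illustrative Proxy-NCA-style loss with one proxy per class.

    x       : (N, d) L2-normalized embeddings
    y       : (N,)   integer class labels
    proxies : (C, d) L2-normalized learnable proxy vectors
    """
    n = len(y)
    # Squared Euclidean distance from every example to every proxy: (N, C)
    d2 = ((x[:, None, :] - proxies[None, :, :]) ** 2).sum(axis=2)
    # Numerator attracts each example to its own class proxy
    pos = np.exp(-d2[np.arange(n), y])
    # Denominator repels it from all other (negative) proxies
    mask = np.ones_like(d2, dtype=bool)
    mask[np.arange(n), y] = False
    neg = np.exp(-d2)[mask].reshape(n, -1).sum(axis=1)
    return -np.log(pos / neg).mean()
```

Replacing example-to-example comparisons with example-to-proxy comparisons shrinks the number of loss terms from the O(N^2) pairs or O(N^3) triplets of the training set to N x C per pass, which is the flexibility the excerpt highlights.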
“…This optimizes a distance-metric-based or KL-divergence loss on graph pairs or triplets [2,6,25], which necessitates vast numbers of training pairs or triplets to capture global characteristics. One way to avoid explicit pair or triplet generation is efficient batch-wise learning that optimizes a classification loss [35,38]. However, pairwise node matching in a batch-wise setting is problematic due to graph size variability.…”
Section: Cross-Global Attention Node Matching
confidence: 99%