2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01046
LoOp: Looking for Optimal Hard Negative Embeddings for Deep Metric Learning

Cited by 14 publications (6 citation statements) · References 28 publications

“…InfoNCE loss was identified to have the hardness-aware property, which is critical for optimization [64] and for preventing collapse through instance de-correlation [1]. [15], [34], [36], [49], [62], [66], [69] have demonstrated that hard negative sample mining strategies improve performance over the baselines. Notably, [65] identified that CL enforces alignment and uniformity of the feature space, which benefits downstream tasks.…”
Section: Related Work
Mentioning confidence: 99%
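
The hardness-aware property referenced above follows directly from the softmax inside InfoNCE: each negative receives weight proportional to exp(s_i/τ), so more-similar (harder) negatives dominate the gradient. A minimal NumPy sketch of that weighting; the temperature and the toy similarity values are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

def infonce_negative_weights(pos_sim, neg_sims, tau=0.1):
    """Softmax weights that InfoNCE implicitly assigns to each negative.

    Harder negatives (higher similarity to the anchor) receive
    exponentially larger weight: the hardness-aware property.
    """
    logits = np.concatenate(([pos_sim], neg_sims)) / tau
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return probs[1:]                        # weights on the negatives only

# The negative at similarity 0.8 receives far more weight than the one at 0.1:
print(infonce_negative_weights(0.9, np.array([0.8, 0.5, 0.1])))
```
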
“…From an optimization viewpoint, the hardness-aware property puts more weight on optimizing negative pairs that have high similarities. This approach is inspired by hard example mining and has proven effective [4], [36], [49], [62], [66], [72].…”
Section: B. Hardness-Aware Property in DimCL
Mentioning confidence: 99%
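
Concretely, writing the InfoNCE loss for one anchor with positive similarity s₊ and negative similarities sᵢ at temperature τ, the gradient with respect to each negative similarity carries exactly that negative's softmax weight, which is the reweighting the passage describes (a standard derivation, not quoted from the cited papers):

```latex
\mathcal{L} = -\log \frac{\exp(s_{+}/\tau)}{\exp(s_{+}/\tau) + \sum_{i} \exp(s_{i}/\tau)},
\qquad
\frac{\partial \mathcal{L}}{\partial s_{i}}
  = \frac{1}{\tau} \cdot
    \frac{\exp(s_{i}/\tau)}{\exp(s_{+}/\tau) + \sum_{j} \exp(s_{j}/\tau)}
```

Because the weight grows exponentially in sᵢ, the few highest-similarity negative pairs absorb most of the optimization effort.
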
“…Most of these methods ignored inter-class diversity and relied on costly hard mining strategies. Further, Vasudeva et al. [30] looked for optimal hard negative embeddings. In addition, Xuan et al. [31] showed the importance of intra-class variance in learning embeddings.…”
Section: Related Work
Mentioning confidence: 99%
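
One way to read the "optimal" idea behind LoOp [30], as opposed to selection-based mining: rather than picking a hard negative from the samples at hand, solve a small closed-form optimization for the hardest point spanned by existing negative embeddings. The sketch below is an illustrative simplification under our own assumptions (projecting the anchor onto the segment between two negatives), not the authors' exact formulation:

```python
import numpy as np

def optimal_hard_negative(anchor, neg_a, neg_b):
    """Closest point to `anchor` on the segment between two negative
    embeddings: a synthesized hard negative obtained by optimization
    rather than by mining an existing sample.

    Minimizes ||anchor - ((1 - t) * neg_a + t * neg_b)||^2 over t in [0, 1];
    the minimizer has the closed form below.
    """
    d = neg_b - neg_a
    t = np.dot(anchor - neg_a, d) / np.dot(d, d)
    t = np.clip(t, 0.0, 1.0)
    return (1.0 - t) * neg_a + t * neg_b
```
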
“…Deep metric learning aims to learn a similarity metric superior to the traditional Euclidean distance by deriving high-dimensional data representations [1]. Contemporary research underscores the importance of designing batch samplers [2-4] and employing online triplet mining [5-8], with hard negative mining schemes [9-11] traditionally serving as the preferred choice given their crucial role in refining the learned similarity metric. For key concepts related to hard example mining and batch samplers, see Section 1.…”
Section: Introduction
Mentioning confidence: 99%
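
Batch samplers of the kind cited above commonly follow the P×K pattern: each mini-batch draws P classes and K examples per class, so online mining always finds both positives and negatives for every anchor. A minimal sketch; the function name and default values are our own illustrative choices:

```python
import random
from collections import defaultdict

def pk_batches(labels, p=4, k=4, n_batches=10, seed=0):
    """Yield index batches with P classes and K samples per class,
    so every anchor has at least K - 1 in-batch positives."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    # Only classes with at least K samples can fill their quota.
    classes = [c for c, idxs in by_class.items() if len(idxs) >= k]
    assert len(classes) >= p, "need at least P eligible classes"
    for _ in range(n_batches):
        batch = []
        for c in rng.sample(classes, p):
            batch.extend(rng.sample(by_class[c], k))
        yield batch
```
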
“…Challenge 2: Online Triplet Mining overlooks the issue of excessively loose clusters caused by outliers serving as anchors. The Online Triplet Mining (OTM) scheme, proposed with the Triplet Loss [5], operates on triplets of examples (anchor, positive, negative) and generally aims to minimize the distance between the anchor and the positive while maximizing the distance between the anchor and the negative within the mini-batch [5,7,8,25]. Nonetheless, when the anchor is an outlier, training can produce overly loose clusters, which are undesirable because they harm classification.…”
Section: Introduction
Mentioning confidence: 99%
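
The OTM scheme described in this excerpt is commonly realized as batch-hard mining over a pairwise distance matrix. A PyTorch sketch follows (the margin value is an illustrative assumption); a comment marks where an outlier anchor produces the loose-cluster failure mode the passage warns about:

```python
import torch

def batch_hard_triplet_loss(emb, labels, margin=0.2):
    """Batch-hard triplet loss: for each anchor, take its farthest
    in-batch positive and nearest in-batch negative (requires at
    least two samples per class in the batch, e.g. a PxK sampler)."""
    dist = torch.cdist(emb, emb)                       # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # positive-pair mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)

    pos = dist.masked_fill(~same | eye, float('-inf')).amax(dim=1)
    neg = dist.masked_fill(same, float('inf')).amin(dim=1)

    # If an anchor is an outlier, `pos` is large for every choice of
    # positive, so the loss keeps pulling the whole class toward it:
    # the loose-cluster failure mode discussed above.
    return torch.relu(pos - neg + margin).mean()
```
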