2022
DOI: 10.1007/978-3-031-19809-0_23

DAS: Densely-Anchored Sampling for Deep Metric Learning

Cited by 12 publications (3 citation statements)
References 54 publications
“…Influenced by XBM, several works have embedded a memory bank into the network to adapt it to various tasks [40], [86]-[107]. To enhance the diversity of semantic differences, Liu et al. [99] construct a memory bank from historical intra-class embedding representations. Motivated by the issue of slow feature drift, Wang et al. [100] propose a quantization-code memory bank that reduces feature drift so that historical feature representations can be used effectively.…”
Section: Aspect Of Historical Type
confidence: 99%
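The memory-bank idea in the excerpt above is straightforward to sketch: keep a fixed-size queue of embeddings (and labels) from past iterations and mine pairs against it in addition to the current batch. A minimal PyTorch sketch, with class and method names that are illustrative assumptions rather than the actual implementation of XBM or the methods in [99]/[100]:

```python
import torch

class MemoryBank:
    """Fixed-size FIFO queue of historical embeddings (XBM-style sketch)."""

    def __init__(self, size: int, dim: int):
        self.feats = torch.zeros(size, dim)
        self.labels = torch.zeros(size, dtype=torch.long)
        self.ptr = 0       # next slot to overwrite
        self.full = False  # True once the queue has wrapped around

    @torch.no_grad()
    def enqueue(self, feats: torch.Tensor, labels: torch.Tensor):
        # Detach so stored features carry no gradient history.
        for f, y in zip(feats.detach().cpu(), labels.cpu()):
            self.feats[self.ptr] = f
            self.labels[self.ptr] = y
            self.ptr = (self.ptr + 1) % self.feats.size(0)
            self.full = self.full or self.ptr == 0

    def get(self):
        k = self.feats.size(0) if self.full else self.ptr
        return self.feats[:k], self.labels[:k]
```

In use, each training step would embed the batch, compute the metric loss against both in-batch pairs and the pairs formed with `bank.get()`, and then enqueue the fresh embeddings; slow feature drift is what keeps the stale entries usable.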
“…These methods use linear interpolation to generate new samples, which leads to false negatives and positives because class boundaries on the manifold are difficult to determine. DAS [21] and IAA [5] try to address this issue by adjusting features slightly and adding extra examples during training, aiming to ensure that the new examples still belong to the same class. Despite these attempts, completely removing the problem remains challenging.…”
Section: Hard Sample Mining
confidence: 99%
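The failure mode this excerpt describes is easy to make concrete: a synthetic embedding is a convex combination of two real ones, and nothing guarantees the combination stays inside the class region when the class manifold is curved. A short sketch (the function name is an illustrative assumption, not code from DAS or IAA):

```python
import torch

def interpolate(a: torch.Tensor, b: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Synthesize an embedding on the line segment between a and b.

    Even with same-class anchors, on a curved manifold the interpolated
    point may fall outside the class region, which is the false
    positive/negative risk discussed above.
    """
    return lam * a + (1.0 - lam) * b
```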
“…Approaches such as hard sample mining and generation have been proposed to aid network convergence by providing substantial gradients [1,41,12,34,49,48,17,18]. A recent advance in generating suitable hard samples is to create supplementary training data [17,38,5,21], primarily through linear interpolation, a prevalent method for synthesizing samples.…”
Section: Introduction
confidence: 99%
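For context on why hard samples supply large gradients: in-batch hardest-negative mining selects, for each anchor, the closest embedding with a different label, and these near-boundary pairs dominate a margin-based loss. A minimal miner, assuming L2 distances (a common choice, not specific to any cited method):

```python
import torch

def hardest_negatives(emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Return, for each anchor, the index of its nearest in-batch negative."""
    d = torch.cdist(emb, emb)                          # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-label mask (incl. self)
    d = d.masked_fill(same, float("inf"))              # exclude positives and self
    return d.argmin(dim=1)                             # hardest negative per anchor
```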