2022
DOI: 10.48550/arxiv.2206.01197

Hard Negative Sampling Strategies for Contrastive Representation Learning

Cited by 5 publications (5 citation statements)
References 0 publications
“…49 Expanding on that idea, to enhance the quality of negative samples, we leverage the output predictions of the first-stage model and focus on sampling the hard negative labels that have relatively high probabilities of being positive; we refer to this method as hard negative sampling. 50,51 The concept of hard negative sampling in this work is illustrated in Figure 4. Since only the reagents and solvents with sufficiently high probabilities (larger than 0.3) are ranked in the second-stage model, more emphasis should be placed on those hard negative chemicals.…”
Section: Methods
confidence: 99%
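The passage above describes picking negative labels whose first-stage predicted probability of being positive exceeds a threshold (0.3 in that work). Below is a minimal Python sketch of this idea; the function and variable names (sample_hard_negatives, first_stage_probs) are illustrative assumptions, not code from the cited paper.

```python
import numpy as np

def sample_hard_negatives(first_stage_probs, positive_labels, threshold=0.3,
                          n_samples=8, rng=None):
    """Return indices of 'hard' negatives: labels that are not positives but
    whose predicted probability of being positive is at least `threshold`."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(first_stage_probs)
    candidates = [i for i, p in enumerate(probs)
                  if i not in positive_labels and p >= threshold]
    if len(candidates) < n_samples:
        # Fall back to the highest-scoring negatives when too few pass the threshold.
        ranked = np.argsort(-probs)
        candidates = [int(i) for i in ranked if int(i) not in positive_labels][:n_samples]
    k = min(n_samples, len(candidates))
    return rng.choice(candidates, size=k, replace=False)

# Example: 6 candidate reagents/solvents; labels 0 and 3 are the true positives.
probs = [0.92, 0.45, 0.05, 0.88, 0.31, 0.02]
print(sample_hard_negatives(probs, positive_labels={0, 3}, n_samples=2))
```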
“…As training continues, most samples become too easy and contribute less to training. Therefore, the methods in (Tabassum et al., 2022; Robinson et al., 2020; Kalantidis et al., 2020) propose hard mining strategies to focus on informative samples. In this paper, considering the massive number of instances in M_Ins, contrasting with all of these instances naturally leads to redundancy and hinders training.…”
Section: Graph Contrastive Learning
confidence: 99%
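As a rough illustration of the hard-mining idea mentioned above (focusing on informative samples rather than contrasting an anchor against every stored instance), here is a small PyTorch sketch that keeps only the top-k most anchor-similar negatives from an instance memory. The names (topk_hard_negatives, memory_bank) are assumptions for illustration and do not come from the cited papers.

```python
import torch
import torch.nn.functional as F

def topk_hard_negatives(anchor, memory_bank, k=256):
    """anchor: (d,) embedding; memory_bank: (N, d) stored instance embeddings.
    Returns the k negatives most similar to the anchor (the 'hard' ones)."""
    sims = F.normalize(memory_bank, dim=1) @ F.normalize(anchor, dim=0)  # (N,)
    hard = sims.topk(k=min(k, sims.numel())).indices
    return memory_bank[hard], sims[hard]
```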
“…Additionally, HCL [39] revises the original InfoNCE objective by assigning higher weights to hard negatives within the mini-batch. Recently, UnReMix [44] was proposed to sample hard negatives by effectively capturing aspects of anchor similarity, representativeness, and model uncertainty. However, such locally sampled hard negatives cannot sufficiently exploit hard negatives from the dataset.…”
Section: Related Work
confidence: 99%
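For concreteness, the sketch below shows one way to up-weight hard negatives inside an InfoNCE-style loss, in the spirit of the HCL idea summarized above. The specific weighting (a softmax over negative similarities scaled by a hypothetical beta parameter) is an assumption for illustration, not the exact objective of [39].

```python
import torch
import torch.nn.functional as F

def hard_weighted_info_nce(anchor, positive, negatives, temperature=0.1, beta=1.0):
    """anchor, positive: (d,) tensors; negatives: (n, d) tensor of in-batch negatives."""
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)

    pos_sim = torch.dot(anchor, positive) / temperature   # scalar similarity
    neg_sim = (negatives @ anchor) / temperature           # (n,) similarities

    # Harder negatives (more similar to the anchor) get larger weights;
    # weights are detached so they act as fixed importance factors.
    weights = torch.softmax(beta * neg_sim.detach(), dim=0) * neg_sim.numel()
    weighted_neg = torch.logsumexp(neg_sim + weights.log(), dim=0)

    # Standard InfoNCE form, with the reweighted negative term in the denominator.
    denom = torch.logsumexp(torch.stack([pos_sim, weighted_neg]), dim=0)
    return denom - pos_sim
```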