2022
DOI: 10.48550/arxiv.2203.14370
Preprint
CaCo: Both Positive and Negative Samples are Directly Learnable via Cooperative-adversarial Contrastive Learning

Abstract: As a representative self-supervised method, contrastive learning has achieved great success in unsupervised representation learning. It trains an encoder by distinguishing positive samples from negative ones given query anchors. These positive and negative samples play critical roles in defining the objective that learns a discriminative encoder, preventing it from learning trivial features. While existing methods heuristically choose these samples, we present a principled method where both positive and neg…
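The sampling mechanics the abstract describes (distinguishing a positive from a set of negatives given a query anchor) are typically scored with the InfoNCE objective that such contrastive methods build on. The function below is an illustrative NumPy sketch of that objective, not the paper's implementation; the embedding size, temperature, and random toy inputs are assumptions for demonstration.

```python
import numpy as np

def info_nce_loss(query, positive, negatives, temperature=0.07):
    """InfoNCE contrastive loss for one query anchor.

    query:      (d,) embedding of the anchor
    positive:   (d,) embedding of the positive sample
    negatives:  (n, d) embeddings of the negative samples
    All vectors are assumed L2-normalized.
    """
    # Similarity logits: the positive first, then all negatives.
    logits = np.concatenate([[query @ positive], negatives @ query]) / temperature
    # Cross-entropy with the positive at index 0 (log-softmax form).
    logits -= logits.max()  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]

# Toy usage with random unit-norm embeddings (hypothetical data).
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
q = unit(rng.normal(size=16))
negs = np.stack([unit(rng.normal(size=16)) for _ in range(8)])
loss_easy = info_nce_loss(q, q, negs)        # positive identical to query
loss_hard = info_nce_loss(q, negs[0], negs)  # positive is actually a negative
```

A lower loss when the positive matches the query is exactly the signal the encoder is trained on; methods like CaCo make the positive and negative embeddings themselves learnable rather than heuristically chosen.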

Cited by 1 publication (1 citation statement)
References 27 publications
“…For instance, a hard negative sampling strategy [178] mines the negative pairs that are similar to the samples but likely belong to different classes. To train negative pairs and (or) positive pairs by adversarial training [179], [180] learn a set of "adversarial negatives" confused with the given samples, or "cooperative positives" similar to the given samples. These strategies are designed to find the better negative and positive pairs for improving contrastive learning.…”
Section: Discriminative Models
Confidence: 99%