2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00013
Noise Is Also Useful: Negative Correlation-Steered Latent Contrastive Learning

Citations: cited by 15 publications (4 citation statements).
References: 21 publications.
“…For example, the work (Jian, Gao, and Vosoughi 2022) contrasts examples from text and examples from another modality simultaneously while learning sentence embeddings. In (Yan et al. 2022), contrastive learning is exploited in the latent metric space to explore the useful negative correlation hidden in noisy data, which improves the robustness of DNNs. Similarly, the work (Wang et al. 2021) extends contrastive learning to the multi-label classification task.…”
Section: Contrastive Learning
confidence: 99%
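The statement above refers to a contrastive objective computed over latent embeddings. As a point of reference, here is a minimal sketch of a generic InfoNCE-style contrastive loss; it illustrates the general technique only, not the specific negative-correlation-steered loss of Yan et al. (2022), and names such as info_nce_loss and temperature are hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchors: torch.Tensor, positives: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Generic InfoNCE loss over a batch of latent embeddings.

    Row i of `positives` is the positive for row i of `anchors`;
    every other row in the batch serves as a negative.
    """
    a = F.normalize(anchors, dim=1)            # unit-norm latent vectors
    p = F.normalize(positives, dim=1)
    logits = a @ p.t() / temperature           # (N, N) cosine similarities
    labels = torch.arange(a.size(0), device=a.device)  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Usage sketch: embeddings of two views of the same (possibly noisy) samples.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```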
“…Then we dig into the details of the normal distribution of the policy initialization models. In the Adaptive Sampling with Reward (ASR) setting (Dou et al. 2022b), the policy initialization distribution guides a sampling distribution p(O_n | O_a), which builds on the distance between the negative sample O_n and the anchor sample O_a in triplet loss construction (Chen et al. 2019; Yan et al. 2022; Dou, Luo, and Yang 2022). A high-variance normal policy initialization means that, at the beginning of the sampling stage, the model samples negatives across a wide range of distances from the anchor, covering hard, semi-hard, and easy negative samples.…”
Section: Main Contributions
confidence: 99%
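The distance-steered sampling p(O_n | O_a) described above can be made concrete with a small sketch: a target anchor-negative distance is drawn from a normal policy distribution, and the candidate whose distance to the anchor best matches that target is returned. This is an assumption-based illustration of the idea, not the cited authors' implementation; sample_negative, mu, and sigma are hypothetical names.

```python
import numpy as np

def sample_negative(anchor: np.ndarray, candidates: np.ndarray,
                    mu: float, sigma: float, rng=None) -> np.ndarray:
    """Pick the negative whose anchor distance best matches a target
    distance drawn from N(mu, sigma^2).

    A large sigma mixes hard (near), semi-hard, and easy (far) negatives,
    matching the high-variance policy initialization described above;
    a small sigma concentrates sampling around distance mu.
    """
    rng = rng or np.random.default_rng()
    dists = np.linalg.norm(candidates - anchor, axis=1)  # anchor-candidate distances
    target = rng.normal(mu, sigma)                       # sampled target distance
    return candidates[np.argmin(np.abs(dists - target))]

# Usage sketch: a high-variance policy at the start of training.
anchor = np.random.default_rng(0).standard_normal(128)
negatives = np.random.default_rng(1).standard_normal((500, 128))
neg = sample_negative(anchor, negatives, mu=14.0, sigma=6.0)
```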
“…Zhang et al. (2021) propose to sample counterfactual instances and learn inter-modality relationships via contrastive learning for visual commonsense reasoning. Contrastive learning is also widely used for weakly supervised learning (Du et al., 2022; Yan et al., 2022; Gao et al., 2022; Xie et al., 2022; Li et al., 2022b; Wan et al., 2022; Yi et al., 2022; Chen et al., 2021). In this paper, we leverage contrastive learning to address the bias and noise problems in DS-MRC.…”
Section: Related Work
confidence: 99%