2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00046
Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels

Cited by 56 publications (31 citation statements)
References: 20 publications

“…The above results also imply when the noise rate is high such that Ỹ starts to become uninformative, dropping the noisy labels and focusing on obtaining the disentangled features helps with the identifiability of T(X). This observation helps explain recent successes in applying semi-supervised (Cheng et al., 2020a; Li et al., 2020; Nguyen et al., 2019) and self-supervised (Cheng et al., 2021; Zheltonozhskii et al., 2022; Ghosh & Lan, 2021) learning techniques to the problem of learning from noisy labels.…”
Section: Disentangled Features
Citation type: mentioning (confidence: 67%)
“…By introducing soft labels, our CMW-Net-SL achieves superior performance to recent SOTA methods, DivideMix and ELR. Furthermore, by combining the self-supervised pretraining technique proposed in the C2D method [88], we further boost the performance. In Fig.…”
Section: Learning With Real-world Noisy Datasets
Citation type: mentioning (confidence: 99%)
“…Following the standard protocol [37], we test the trained model on the WebVision validation set and the ImageNet validation set. Following C2D [88], we used ResNet-50 architecture as the classifier network for training. For self-supervised pre-training, we directly use the pretrained self-supervised models released at https://github.com/ContrastToDivide/C2D, which is based on the SimCLR implementation.…”
Section: Mini-WebVision
Citation type: mentioning (confidence: 99%)
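For readers who want to reuse the released checkpoints mentioned in the statement above, a minimal loading sketch follows. It assumes the checkpoint is a plain PyTorch state-dict with torchvision-style ResNet-50 keys; the file name below is hypothetical, so check the linked C2D repository for the actual format and key names.

import torch
import torchvision

# The citing paper uses ResNet-50 as the classifier network; mini-WebVision covers 50 classes.
model = torchvision.models.resnet50(num_classes=50)

# Load the self-supervised pre-trained weights (hypothetical file name) and drop
# any projection-head / classifier entries so only the backbone is initialized.
state = torch.load("c2d_simclr_resnet50.pth", map_location="cpu")
backbone_state = {k: v for k, v in state.items()
                  if not k.startswith(("fc.", "projection_head."))}
missing, unexpected = model.load_state_dict(backbone_state, strict=False)
print("missing keys (expected: only the freshly initialized fc.* classifier):", missing)

Using strict=False lets the randomly re-initialized classification layer stay untouched while every backbone weight comes from the self-supervised checkpoint.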
“…Overview. Following the two-stage learning style [13,32,41,65], our aim is to induce robust pre-trained representations of instances for learning with noisy labels.…”
Section: Selective-supervised Contrastive Learning
Citation type: mentioning (confidence: 99%)
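The "two-stage learning style" this statement refers to (label-free pre-training of the representation, then training on the noisy labels) can be made concrete with a toy, runnable sketch. Everything below is illustrative and runs on random tensors: the simplified InfoNCE objective stands in for the actual contrastive pre-training used by the cited methods, and none of it is the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: 256 "images" of 32 features with labels that may be noisy.
x = torch.randn(256, 32)
noisy_y = torch.randint(0, 10, (256,))

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
classifier = nn.Linear(64, 10)

# Stage 1: label-free pre-training. Two "views" per sample are made by adding
# noise; a simplified InfoNCE objective pulls matching views together and
# pushes apart the other samples in the batch (in-batch negatives).
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for _ in range(100):
    v1 = x + 0.1 * torch.randn_like(x)
    v2 = x + 0.1 * torch.randn_like(x)
    z1 = F.normalize(encoder(v1), dim=1)
    z2 = F.normalize(encoder(v2), dim=1)
    logits = z1 @ z2.t() / 0.5
    loss = F.cross_entropy(logits, torch.arange(len(x)))
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: train a classifier on the noisy labels on top of the pre-trained
# encoder (frozen here for simplicity; methods like C2D instead warm-start a
# semi-supervised pipeline such as DivideMix from these weights).
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for _ in range(100):
    with torch.no_grad():
        feats = encoder(x)
    loss = F.cross_entropy(classifier(feats), noisy_y)
    opt.zero_grad(); loss.backward(); opt.step()
print("final training loss on noisy labels:", loss.item())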
“…Recent works [7,8,13,18,65] show that, working in a pair-wise manner, contrastive learning (CL) methods can bring good latent representations to help following tasks. Based on whether supervised information is provided, the CL methods can be grouped into supervised contrastive learning (Sup-CL) [27] and unsupervised contrastive learning (Uns-CL) [7,8,18].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
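The Sup-CL / Uns-CL distinction drawn in this statement comes down to how positive pairs are chosen: unsupervised methods (e.g., SimCLR) treat only the other augmented view of the same image as a positive, while supervised contrastive learning (SupCon) also treats all samples sharing a label as positives. A minimal sketch of a SupCon-style loss follows; the shapes, temperature, and helper name are illustrative, not taken from the cited papers.

import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss in the spirit of SupCon (Khosla et al., 2020).

    z: (N, D) embeddings; labels: (N,) class ids.
    Positives for an anchor are all other samples with the same label; with no
    label information, the positive set shrinks to the other augmented view of
    the same image, which recovers unsupervised (SimCLR-style) contrastive learning.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                       # (N, N) similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all other samples (the anchor itself is excluded)
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability of the positives per anchor; anchors whose class
    # appears only once in the batch have no positive and are skipped
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    return (-pos_log_prob[valid] / pos_counts[valid]).mean()

# Toy usage: 16 embeddings over 4 classes.
z = torch.randn(16, 128)
labels = torch.randint(0, 4, (16,))
print(supcon_loss(z, labels).item())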