SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning

Preprint, 2021
DOI: 10.48550/arxiv.2101.06480

Cited by 9 publications (18 citation statements)
References 12 publications
“…Baseline methods: We compare to a number of recent methods such as FixMatch (Sohn et al., 2020), MixMatch (Berthelot et al., 2019b), DASH (Xu et al., 2021), SelfMatch (Kim et al., 2021), Mean Teacher (Tarvainen and Valpola, 2017), Virtual Adversarial Training (Miyato et al., 2018), and Mixup (Berthelot et al., 2019b).…”
Section: Semi-Supervised Learning (mentioning)
Confidence: 99%
“…S4L (Zhai et al. 2019) integrated two pretext-based self-supervised approaches into SSL and showed that unsupervised representation learning complements existing SSL methods. SelfMatch (Kim et al. 2021) pre-trained the model on unlabeled data with state-of-the-art self-supervised contrastive learning techniques and then re-trained on the whole dataset with SSL approaches. In SimPLE (Hu et al. 2021), a revised pair loss was introduced to explore the relations among unlabeled samples.…”
Section: Related Work (mentioning)
Confidence: 99%
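The two-stage recipe this quote attributes to SelfMatch (contrastive pre-training on unlabeled data, then semi-supervised fine-tuning) can be sketched as follows. This is a minimal PyTorch illustration of the generic SimCLR-then-FixMatch pattern, not the authors' released code; the Gaussian-noise `weak_aug`/`strong_aug` functions are stand-ins for real image augmentations, and all names and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

# Illustrative perturbations; real implementations use image augmentations
# (crop/flip for the weak view, RandAugment-style for the strong view).
def weak_aug(x):   return x + 0.05 * torch.randn_like(x)
def strong_aug(x): return x + 0.20 * torch.randn_like(x)

def nt_xent_loss(z1, z2, temperature=0.5):
    """Stage 1 (contrastive pre-training): SimCLR-style NT-Xent loss
    pulling the two projected views of each sample together."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d)
    sim = z @ z.t() / temperature                             # pairwise cosine similarity
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-pairs
    # positive for row i is row i+N (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

def fixmatch_loss(model, x_lab, y_lab, x_unlab, threshold=0.95, lambda_u=1.0):
    """Stage 2 (semi-supervised fine-tuning): FixMatch-style consistency.
    Weakly augmented unlabeled views produce pseudo-labels; strongly
    augmented views are trained to match the confident ones."""
    sup = F.cross_entropy(model(weak_aug(x_lab)), y_lab)
    with torch.no_grad():
        probs = torch.softmax(model(weak_aug(x_unlab)), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = (conf >= threshold).float()                    # confidence mask
    unsup = (F.cross_entropy(model(strong_aug(x_unlab)), pseudo,
                             reduction='none') * keep).mean()
    return sup + lambda_u * unsup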
“…All the related works are sorted by their publication date. Results with * were reported in FixMatch (Sohn et al. 2020), while results with † come from the most recent papers (Kim et al. 2021; Li, Xiong, and Hoi 2020; Xu et al. 2021; Abuduweili et al. 2021), respectively.…”
(mentioning)
Confidence: 97%
“…To maximize the value of the limited labels, existing works either try to maintain consistency under the introduced perturbations [45], [46] or seek the relationships among different samples [47], [48]. Self-supervised learning [49]–[52] is a feasible way to learn visual representations for semi-supervised learning, and can partly compensate for the lack of annotations. Specific to medical image segmentation, Xia et al. [53] proposed uncertainty-aware multi-view co-training for 3D volumetric medical image segmentation.…”
Section: B. Reducing Annotation Efforts for Medical Image Segmentation (mentioning)
Confidence: 99%
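As a concrete instance of the perturbation-consistency idea in this last quote, here is a minimal PyTorch sketch in the style of Mean Teacher (Tarvainen and Valpola, 2017), one of the baselines cited above. The Gaussian noise stands in for whatever perturbation an application actually uses, and all function names and constants are illustrative assumptions, not code from any of the cited papers.

```python
import torch
import torch.nn.functional as F

def consistency_loss(student, teacher, x_unlab, noise_std=0.1):
    """Perturbation consistency: the student should agree with the
    teacher across two independently noised views of the same input."""
    with torch.no_grad():
        t_probs = torch.softmax(
            teacher(x_unlab + noise_std * torch.randn_like(x_unlab)), dim=1)
    s_probs = torch.softmax(
        student(x_unlab + noise_std * torch.randn_like(x_unlab)), dim=1)
    return F.mse_loss(s_probs, t_probs)

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """Teacher weights track an exponential moving average of the
    student's, giving a more stable target than the student itself."""
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(decay).add_(sp, alpha=1 - decay)
```

In a training loop, `consistency_loss` would be added to the supervised loss on the labeled subset, with `ema_update` called after each optimizer step; no labels are needed for the consistency term, which is what lets these methods exploit unlabeled data.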