2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw54120.2021.00129
ScatSimCLR: self-supervised contrastive learning with pretext task regularization for small-scale datasets

Cited by 8 publications (6 citation statements) | References 20 publications
“…ADC [48]: 53.0, IIC [49]: 61, TSUK [50]: 66.5, SCAN [16]: 80.9, ScatSimCLR [30]: 85.1, RUC [51]: 86.7, MV-MR (ours): 89.7…”
Section: Methods (STL-10)
confidence: 99%
“…IIC [49]: 25.7, TSUC [50]: 35.3, SCAN [16]: 50.7, RUC [51]: 54.3, LFER Ensemble [52]: 56.1, ScatSimCLR [30]: 63.8…”
Section: Methods (CIFAR-20)
confidence: 99%
“…A well-crafted pretext task [26,27] is a cornerstone for achieving successful self-supervised learning. The essence of such a task lies in setting up a specific objective within unlabeled data, enabling the model to glean meaningful representations from it.…”
Section: Self-Supervised Learning with CutPaste-Mix
confidence: 99%
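
The quoted passage describes pretext tasks only in general terms. As a hedged illustration (not the formulation of the cited works), a pretext objective can be as simple as predicting which of four rotations was applied to an unlabeled image; the encoder, head, and tensor sizes below are assumptions chosen only to make the sketch self-contained.

```python
import torch
import torch.nn as nn

def rotation_pretext_batch(images: torch.Tensor):
    """Create a rotation-prediction pretext batch.

    Each image is rotated by 0/90/180/270 degrees; the rotation index
    serves as a label derived from the unlabeled data itself.
    """
    rotated, targets = [], []
    for k in range(4):  # k * 90 degrees
        rotated.append(torch.rot90(images, k=k, dims=(-2, -1)))
        targets.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(targets)

# Hypothetical tiny encoder and linear head trained to solve the pretext task.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 4)

images = torch.randn(8, 3, 96, 96)  # e.g. STL-10-sized crops (illustrative)
x, y = rotation_pretext_batch(images)
loss = nn.CrossEntropyLoss()(head(encoder(x)), y)
```

The point of the sketch is that the label (the rotation index) is manufactured from the data itself, so the encoder can learn a meaningful representation without any annotation.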
“…A base encoder and a projection head are then trained using a contrastive loss, which we also leverage in one branch of ACM-Net, to maximize agreement between representations of these two views and minimize agreement with views originating from different images. Since training convergence of CL models may be difficult to achieve, ScatSimCLR [21] additionally regresses the augmentation parameters of each view to regularize the training. In ACM-Net, the CL training is regularized by adding a separate PL branch.…”
Section: Related Work
confidence: 99%
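
The mechanism described in this quote, contrastive agreement between two augmented views plus regression of each view's augmentation parameters as a regularizer, can be sketched as follows. The NT-Xent form of the contrastive loss, the MSE regression term, the weighting factor `lam`, and all function names are assumptions for illustration, not the exact losses of ScatSimCLR [21] or ACM-Net.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent) loss between two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, d) unit-norm embeddings
    sim = z @ z.t() / temperature                 # scaled cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-pairs
    # The positive for view i is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def contrastive_with_aug_regression(h1, h2, proj, aug_head,
                                    params1, params2, lam=1.0):
    """Contrastive loss plus regression of per-view augmentation parameters
    (e.g. crop, rotation, or jitter values) used as a pretext-task regularizer."""
    loss_cl = nt_xent(proj(h1), proj(h2))
    loss_reg = F.mse_loss(aug_head(h1), params1) + F.mse_loss(aug_head(h2), params2)
    return loss_cl + lam * loss_reg
```

In this sketch the regression head operates on the same encoder features as the projection head, so the auxiliary prediction of augmentation parameters shapes the representation that the contrastive loss acts on; that is the sense in which it serves as a regularizer.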