2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00389
Semi-Supervised Learning with Scarce Annotations

Cited by 40 publications (23 citation statements)
References 15 publications

“…Recent contributions show that coupling self-supervised and semi-supervised learning can increase accuracy when few labels are available. Rebuffi et al. [27] use RotNet [23] as a network initialization strategy, ReMixMatch [18] exploits RotNet [23] together with their semi-supervised algorithm to achieve stability with few labels, and EnAET [26] leverages transformation encoding from AET [39] to improve the consistency of predictions on transformed images.…”
Section: B. Self-supervised Learning (mentioning)
confidence: 99%
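The RotNet [23] pretext task referenced above amounts to predicting which of four rotations (0°, 90°, 180°, 270°) was applied to an image, and using that objective to initialize the network before semi-supervised training. Below is a minimal PyTorch-style sketch of that idea, not the authors' implementation; the backbone, `feat_dim`, and the training-step wrapper are illustrative assumptions.

```python
import torch
import torch.nn as nn

def rotate_batch(x):
    """Build 4 rotated copies (0/90/180/270 degrees) of each image plus
    the matching rotation labels, i.e. the RotNet-style pretext task."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    images = torch.cat(rotations, dim=0)                                 # (4B, C, H, W)
    labels = torch.arange(4).repeat_interleave(x.size(0)).to(x.device)   # (4B,)
    return images, labels

class RotationHead(nn.Module):
    """Feature backbone followed by a 4-way rotation classifier (hypothetical names)."""
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone              # any extractor returning (B, feat_dim) features
        self.classifier = nn.Linear(feat_dim, 4)

    def forward(self, x):
        return self.classifier(self.backbone(x))

def rotation_pretrain_step(model, x, optimizer):
    """One self-supervised pre-training step on an unlabeled image batch x."""
    images, labels = rotate_batch(x)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The pretext needs no annotations, so it can run on the full unlabeled set before the semi-supervised stage begins.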
“…We combine our approach with state-of-the-art pseudo-labeling [17] and consistency regularization-based [18] semi-supervised methods to prove the stability of ReLaB when applied to different semi-supervised strategies. We use the default configuration for pseudo-labeling except for the network initialization, where we make use of the Rotation self-supervised objective [23] and freeze all the layers up to the last convolutional block, in a similar fashion to Rebuffi et al. [27]. We find that this is necessary to preserve strong early features throughout the training.…”
Section: A. Datasets and Implementation Details (mentioning)
confidence: 99%
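The excerpt above describes freezing every layer up to the last convolutional block after rotation pre-training so that the strong early features survive fine-tuning. A hedged sketch of such a freezing helper follows; the ResNet-style block name `layer4` and the helper's name are assumptions, not part of the cited code.

```python
import torch.nn as nn

def freeze_early_blocks(backbone: nn.Module, trainable_prefixes=("layer4",)):
    """Freeze every parameter except those in the last convolutional block
    (assumed here to be 'layer4' of a ResNet-style backbone), preserving the
    early features learned by the rotation pretext during fine-tuning."""
    for name, param in backbone.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in trainable_prefixes)

# Afterwards, only the still-trainable parameters are handed to the optimizer, e.g.:
# optimizer = torch.optim.SGD((p for p in backbone.parameters() if p.requires_grad), lr=0.1)
```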
“…Such self-supervision signals apply to both labeled and unlabeled data, and are argued to teach models to learn high-level features. Instead of utilizing self-supervision as a pre-training stage, which is commonly adopted by most semi-supervised learning algorithms [16,14,19], self-supervision signals are incorporated with pairwise similarity information to supervise the model simultaneously in our proposed algorithm. Thus, the supervising signals injected into the model provide strong regularization, so that performance degradation caused by noisy and biased pairwise pseudo-labels can be largely alleviated.…”
Section: Introduction (mentioning)
confidence: 99%
“…Such self-supervision signals apply to both labeled and unlabeled data, and are argued to teach models to learn high-level features. Instead of utilizing self-supervision as a pre-training stage, which is commonly adopted for semi-supervised learning purposes [20], [81], [134], self-supervision signals are incorporated with pairwise similarity information to supervise the model simultaneously in the proposed algorithm. Thus, the supervising signals injected into the model provide strong regularization, so that performance degradation caused by noisy and biased pairwise pseudo-labels can be largely alleviated.…”
Section: Background and Motivations (mentioning)
confidence: 99%
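Both excerpts describe optimizing a self-supervision term jointly with a pairwise-similarity term, rather than confining self-supervision to a pre-training stage. The sketch below illustrates one plausible form of such a joint objective; the cosine-similarity pairwise criterion, the weight `lam`, and all names are assumptions made for illustration only.

```python
import torch.nn.functional as F

def joint_loss(rot_logits, rot_labels, emb_a, emb_b, pair_labels, lam=1.0):
    """Joint objective: pairwise-similarity supervision regularized by a
    rotation self-supervision term computed on the same batch.

    rot_logits / rot_labels : rotation pretext predictions and targets.
    emb_a, emb_b            : embeddings of the two images in each pair.
    pair_labels             : pseudo pair labels (1 = same class, 0 = not),
                              possibly noisy, hence the regularizing term.
    """
    self_sup = F.cross_entropy(rot_logits, rot_labels)
    sim = F.cosine_similarity(emb_a, emb_b)                          # values in [-1, 1]
    pairwise = F.binary_cross_entropy_with_logits(sim, pair_labels.float())
    return pairwise + lam * self_sup
```

Because both terms are computed every step, the self-supervision acts as a regularizer on the noisy pairwise pseudo-labels instead of only shaping the initialization.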