2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9533616

ReLaB: Reliable Label Bootstrapping for Semi-Supervised Learning

Abstract: Reducing the number of labels required to train convolutional neural networks without performance degradation is key to reducing human annotation effort. We propose Reliable Label Bootstrapping (ReLaB), an unsupervised preprocessing algorithm which improves the performance of semi-supervised algorithms in extremely low supervision settings. Given a dataset with few labeled samples, we first learn meaningful self-supervised, latent features for the data. Second, a label propagation algorithm propaga…
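To make the abstract's second step concrete, the sketch below shows one common form of graph-based label propagation over fixed (e.g. self-supervised) embeddings: a k-NN affinity graph with iterative diffusion of the few seed labels. The feature source, neighbor count k, damping factor alpha, and iteration budget are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def propagate_labels(features, labels, n_classes, k=10, alpha=0.99, n_iter=50):
    """Graph-based label propagation over fixed feature embeddings.

    features : (n, d) array of embeddings for all samples
    labels   : (n,) int array of class indices, -1 for unlabeled samples
    Returns a (n, n_classes) array of soft label assignments.
    """
    n = features.shape[0]
    # Sparse k-NN affinity graph in the embedding space (assumed choice).
    W = kneighbors_graph(features, k, mode="distance", include_self=False)
    W.data = np.exp(-W.data ** 2 / (W.data.std() ** 2 + 1e-12))  # RBF weights
    W = 0.5 * (W + W.T)                                          # symmetrize
    # Symmetrically normalized affinity S = D^{-1/2} W D^{-1/2}.
    d = np.asarray(W.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W.multiply(d_inv_sqrt[:, None]).multiply(d_inv_sqrt[None, :])

    # One-hot seed matrix: all-zero rows for unlabeled samples.
    Y = np.zeros((n, n_classes))
    labeled = labels >= 0
    Y[labeled, labels[labeled]] = 1.0

    # Iterative propagation: F <- alpha * S F + (1 - alpha) * Y.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F
```

Taking the per-row argmax of the returned matrix yields the bootstrapped labels that a downstream semi-supervised learner could consume; since propagated labels are only approximate, they are exactly the kind of noisy supervision the citing work below discusses.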

Cited by 3 publications (1 citation statement)
References 42 publications
“…Label noise research proposes robust algorithms to mitigate approximate labeling. Approximate labeling can occur when a dataset is created from web queries [31] or when labels are inferred using label propagation [2]. Solutions for training a neural network on label noise datasets include lowering the contribution of noisy labels in the training loss [23], correcting the label using the network prediction [4], meta-learning-inspired corrections [57], monitoring feature space consistency [42], or robust data augmentation [59].…”
Section: Semi-Supervised Learning and Label Noise
Citation type: mentioning (confidence: 99%)
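As an illustration of one of the strategies listed above, the sketch below implements a soft bootstrapping loss in the spirit of correcting the label using the network prediction [4]: the given (possibly noisy) label is blended with the model's own softmax output. The blending weight beta and the stop-gradient on the soft target are illustrative assumptions, not a reproduction of any cited paper's exact loss.

```python
import torch
import torch.nn.functional as F

def soft_bootstrapping_loss(logits, targets, beta=0.95):
    """Soft bootstrapping loss for training under label noise.

    Blends the given label with the network's prediction so that
    confident model beliefs can gradually override wrong labels.

    logits  : (batch, n_classes) raw network outputs
    targets : (batch,) int class indices, possibly noisy
    beta    : weight on the given label; beta = 1.0 recovers cross-entropy
    """
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp().detach()  # stop-gradient on the soft target
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    soft_target = beta * one_hot + (1.0 - beta) * probs
    return -(soft_target * log_probs).sum(dim=1).mean()
```

Lowering beta shifts trust from the annotation toward the model, which is why such losses are paired with warm-up epochs of plain cross-entropy before the network's predictions are reliable enough to trust.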