2020
DOI: 10.1007/978-3-030-64984-5_34

Transfer of Pretrained Model Weights Substantially Improves Semi-supervised Image Classification

Abstract: Deep neural networks produce state-of-the-art results when trained on a large number of labeled examples but tend to overfit when small amounts of labeled examples are used for training. Creating a large number of labeled examples requires considerable resources, time, and effort. If labeling new data is not feasible, so-called semi-supervised learning can achieve better generalisation than purely supervised learning by employing unlabeled instances as well as labeled ones. The work presented in this paper is m…
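The abstract describes semi-supervised learning only at a high level; the sketch below illustrates one common instantiation, a pseudo-labelling (self-training) loop that starts from transferred pretrained weights rather than a random initialisation, which is the general idea named in the title. It is not the paper's method: the backbone (torchvision's ResNet-18 with ImageNet weights), the confidence threshold, and the data loaders are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): semi-supervised fine-tuning
# that starts from pretrained ImageNet weights instead of random ones.
import torch
import torch.nn.functional as F
from torchvision import models

# Assumption: labeled_loader yields (images, labels) batches and
# unlabeled_loader yields image batches; both are hypothetical here.
def train_semi_supervised(labeled_loader, unlabeled_loader,
                          num_classes=10, threshold=0.95, epochs=10):
    # Transfer pretrained weights, then replace the classification head.
    # (Older torchvision versions use pretrained=True instead of weights=.)
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

    model.train()
    for _ in range(epochs):
        for (x_l, y_l), x_u in zip(labeled_loader, unlabeled_loader):
            # Supervised cross-entropy on the labelled batch.
            loss = F.cross_entropy(model(x_l), y_l)

            # Pseudo-label the unlabelled batch; keep only confident predictions.
            with torch.no_grad():
                probs = F.softmax(model(x_u), dim=1)
                conf, pseudo = probs.max(dim=1)
                mask = conf >= threshold
            if mask.any():
                loss = loss + F.cross_entropy(model(x_u[mask]), pseudo[mask])

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```

The intuition behind starting from transferred weights is that the network's early pseudo-labels are more reliable than those of a randomly initialised model, so the self-training loop reinforces fewer wrong predictions.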

Cited by 2 publications (1 citation statement)
References 17 publications
“…In this paper, we empirically compare three different ways of integrating self-supervised learning with self-training. In addition to testing them in a setting with random initial weights and training with cross-entropy loss, we also consider the effect of metric learning losses and transfer learning, as they have shown promise in self-training [18,19,20].…”
Section: Introduction
confidence: 99%