2022
DOI: 10.1007/978-3-030-97546-3_52
Better Self-training for Image Classification Through Self-supervision

Cited by 5 publications (3 citation statements)
References 11 publications
“…Self-training has shown promising progress in many domains including vision [2,3,20], NLP [21], and speech [22]. Our method is more closely related to the self-training approaches proposed for semi-supervised learning [23,24,25], where pseudo-labels on unlabeled data are used as training targets.…”
Section: Related Work
Confidence: 99%
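The statement above describes the core self-training mechanism: a model's predictions on unlabeled data are used as training targets. A minimal PyTorch sketch of that idea follows; the teacher/student split, function name, and optimizer handling are illustrative assumptions, not code from the cited works.

```python
# Minimal sketch of pseudo-label self-training, assuming a generic
# PyTorch setup; `teacher`, `student`, and the loaders are hypothetical.
import torch
import torch.nn.functional as F

def pseudo_label_step(teacher, student, optimizer, unlabeled_batch):
    """One self-training step: the teacher's predictions on unlabeled
    data become hard training targets for the student."""
    teacher.eval()
    with torch.no_grad():
        # Teacher produces pseudo-labels for the unlabeled batch.
        pseudo = teacher(unlabeled_batch).argmax(dim=1)

    student.train()
    optimizer.zero_grad()
    # Student is trained on the pseudo-labels as if they were ground truth.
    loss = F.cross_entropy(student(unlabeled_batch), pseudo)
    loss.backward()
    optimizer.step()
    return loss.item()
```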
“…Results we have shown on training with random labels are reminiscent of works in the field of semi-supervised training. Here, the goal is to find auto-generated "pretext tasks" such that training on them leads to the model learning good representations which are useful for downstream tasks [22,16,5]. This is often done with auto-generated labels such as predicting image rotations [7].…”
Section: Related Work
Confidence: 99%
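The rotation-prediction pretext task mentioned in the statement above auto-generates labels from the data itself. Below is a short sketch under the common four-way formulation (classify which quarter-turn was applied); the 4-way classification head and the helper names are assumptions for illustration, not the cited authors' code.

```python
# Illustrative sketch of a rotation-prediction pretext task: each image
# is rotated by 0/90/180/270 degrees and the rotation index is the label.
import torch
import torch.nn.functional as F

def rotation_pretext_batch(images):
    """Given an (N, C, H, W) batch, return all four rotated copies
    together with the rotation index as an auto-generated label."""
    rotated, labels = [], []
    for k in range(4):  # k quarter-turns
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

def rotation_loss(model, images):
    # Assumes the model ends in a 4-way head for this pretext task.
    x, y = rotation_pretext_batch(images)
    return F.cross_entropy(model(x), y)
```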
“…This was later extended to include a measure of confidence in (Shi et al. 2018). Closely related is the concept of self-training, which iteratively integrates into training the most confident of these pseudo-labeled samples and repeats (Dong and Schäfer 2011; Sahito, Frank, and Pfahringer 2021; Xie et al. 2020b). These techniques can become unstable when pseudo-label error accumulates across iterations.…”
Section: Introduction
Confidence: 99%
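The quoted passage describes iterative self-training that folds in only the most confident pseudo-labeled samples each round. A hedged sketch of one selection round follows; the 0.95 threshold and function names are placeholders, not values from the cited papers.

```python
# Sketch of one confidence-filtered selection round for iterative
# self-training; threshold and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def select_confident(model, unlabeled, threshold=0.95):
    """Keep only pseudo-labeled samples whose softmax confidence exceeds
    the threshold; the rest stay unlabeled for later rounds."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled), dim=1)
        conf, pseudo = probs.max(dim=1)
    keep = conf >= threshold
    return unlabeled[keep], pseudo[keep], unlabeled[~keep]

# Each round adds the confident samples to the training set and retrains.
# As the statement notes, any wrong pseudo-labels admitted in early rounds
# are trained on in every later round, so errors can accumulate.
```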