2019
DOI: 10.1109/access.2019.2918794
Recycling: Semi-Supervised Learning With Noisy Labels in Deep Neural Networks

Cited by 33 publications (24 citation statements)
References 13 publications
“…4b), and mixed flipping (Fig. 4c) (Tanaka et al., 2018; Han et al., 2018c; Song et al., 2019; Yu et al., 2019; Kong et al., 2019).…”
Section: B2 Dataset (mentioning, confidence: 96%)
“…We used CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Tin), which are widely used benchmark datasets in the noisy-label field (Han et al., 2018c; Song et al., 2019; Yu et al., 2019; Kong et al., 2019; Arazo et al., 2019; Chen et al., 2019; Kim et al., 2019). CIFAR-10 and CIFAR-100 consist of 10 and 100 classes, respectively, of 32 × 32 color images.…”
Section: B2 Dataset (mentioning, confidence: 99%)
“…Based on this, Recycling,14 which belongs to the second approach mentioned in Section 1, improves the generalization of the network through semi-supervised learning. It first separates clean and noisy samples using a sample-selection method based on the small-loss criterion, which assumes that a sample is more likely to be clean the lower its loss is.…”
Section: Background Knowledge (mentioning, confidence: 99%)
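The small-loss criterion described in this excerpt can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's exact procedure: the function name `split_by_small_loss`, the assumption that the noise rate is known, and the simple fraction-based split are all illustrative choices.

```python
import numpy as np

def split_by_small_loss(losses, noise_rate):
    """Split sample indices into 'clean' and 'noisy' sets via the
    small-loss criterion: the (1 - noise_rate) fraction of samples
    with the smallest per-sample losses is assumed clean."""
    n = len(losses)
    num_clean = int(round((1.0 - noise_rate) * n))
    order = np.argsort(losses)        # ascending: smallest loss first
    clean_idx = order[:num_clean]     # likely-clean samples
    noisy_idx = order[num_clean:]     # likely-noisy samples
    return clean_idx, noisy_idx

# Toy example: 5 per-sample losses, assumed 40% label noise.
losses = np.array([0.1, 2.3, 0.4, 1.8, 0.2])
clean, noisy = split_by_small_loss(losses, noise_rate=0.4)
# clean -> indices of the 3 smallest losses: [0, 4, 2]
```

In practice the per-sample losses would come from the network during training (e.g., cross-entropy with no reduction), and the split would be recomputed as training progresses.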
“…In the first approach,12,13 clean samples are selected from the training data, and the network is trained using only the selected samples. In the second approach,14,15 clean samples are separated from noisy samples, and the network is trained using the clean samples with their original labels and the noisy samples with corrected labels. In general, the second approach achieves better performance than the first because more data are used for training.…”
Section: Introduction (mentioning, confidence: 99%)
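The label-correction step of the second approach can be sketched as follows. This is a hedged NumPy illustration, not the cited papers' actual implementation: the function name `correct_labels` and the choice of argmax pseudo-labeling as the correction rule are assumptions for the sake of the example.

```python
import numpy as np

def correct_labels(probs, labels, noisy_idx):
    """Second approach: keep original labels for clean samples and
    replace the labels of noisy samples with the model's current
    predictions (argmax over class probabilities), so that all
    samples can be used for training."""
    corrected = labels.copy()
    corrected[noisy_idx] = probs[noisy_idx].argmax(axis=1)
    return corrected

# Toy example with 3 samples and 2 classes; sample 1 is flagged noisy.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.7, 0.3]])
labels = np.array([0, 0, 1])
corrected = correct_labels(probs, labels, noisy_idx=np.array([1]))
# corrected -> [0, 1, 1]: only the noisy sample's label was replaced
```

The clean/noisy split itself would typically come from a selection method such as the small-loss criterion, and the corrected labels would then be fed back into standard supervised training.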