2021
DOI: 10.1007/978-3-030-86523-8_44
Small-Vote Sample Selection for Label-Noise Learning

Abstract: The small-loss criterion is widely used in recent label-noise learning methods. However, such a criterion only considers the loss of each training sample in a mini-batch but ignores the loss distribution in the whole training set. Moreover, the selection of clean samples depends on a heuristic clean data rate. As a result, some noisy-labeled samples are easily identified as clean ones, and vice versa. In this paper, we propose a novel yet simple sample selection method, which mainly consists of a Hierarchical …
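For context, the per-mini-batch small-loss criterion that the abstract critiques might look like the sketch below, assuming per-sample cross-entropy losses and a fixed heuristic clean-data rate; all names are illustrative and this is not the authors' implementation.

    # Hedged sketch, not the paper's code: within one mini-batch, keep the
    # clean_rate fraction of samples with the smallest per-sample loss and
    # treat them as clean. clean_rate is exactly the heuristic clean-data
    # rate the abstract questions.
    import torch
    import torch.nn.functional as F

    def small_loss_select(logits: torch.Tensor,
                          labels: torch.Tensor,
                          clean_rate: float = 0.8) -> torch.Tensor:
        """Return indices of the presumed-clean samples in one mini-batch."""
        per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
        num_keep = max(1, int(clean_rate * labels.size(0)))
        return torch.argsort(per_sample_loss)[:num_keep]

    # Example usage with random logits and possibly-noisy labels.
    logits = torch.randn(32, 10)
    labels = torch.randint(0, 10, (32,))
    clean_idx = small_loss_select(logits, labels, clean_rate=0.7)

Because the selection happens per mini-batch with a fixed clean_rate, it ignores the loss distribution of the whole training set, which is the limitation the abstract points out.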

Cited by 2 publications (5 citation statements)
References: 14 publications
“…Inside, cyclical training records the historical loss of each sample, then calculates and ranks every sample's normalized average loss, recognizing samples with higher scores as noisy. Later works also use the value of the historical loss to determine whether a sample carries a noisy label [142]-[145]. To clean samples more reliably, Xu et al. [142] further use the historical loss distribution to handle the varying scales of loss distributions across different training periods.…”
Section: Publication Methods
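A possible reading of the historical-loss ranking described in the statement above, as a hedged sketch: per-epoch losses are normalized so that different training periods become comparable, and samples are ranked by their average normalized loss (higher means more likely noisy). The min-max normalization and all names here are assumptions, not the cited authors' code.

    # Hedged sketch of ranking samples by normalized average historical loss.
    import numpy as np

    def rank_by_historical_loss(loss_history: np.ndarray) -> np.ndarray:
        """loss_history: array of shape (num_epochs, num_samples)."""
        # Min-max normalize each epoch's losses so that the varying loss
        # scales across training periods do not dominate the average.
        mins = loss_history.min(axis=1, keepdims=True)
        maxs = loss_history.max(axis=1, keepdims=True)
        normalized = (loss_history - mins) / (maxs - mins + 1e-12)
        scores = normalized.mean(axis=0)   # average normalized loss per sample
        return np.argsort(-scores)         # most suspect (highest score) first

    history = np.abs(np.random.randn(10, 1000))   # e.g. 10 epochs, 1000 samples
    suspect_order = rank_by_historical_loss(history)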
“…Later works also use the value of the historical loss to determine whether a sample carries a noisy label [142]-[145]. To clean samples more reliably, Xu et al. [142] further use the historical loss distribution to handle the varying scales of loss distributions across different training periods. Although the average loss has shown promising results [141], outliers in the training data can skew it and cause deviations when cleaning the data.…”
Section: Publication Methods
“…Several methods detect clean samples as the ones for which a small loss is obtained [3, 2], motivated by the observation that neural networks first learn the correct labels and only afterwards overfit the noisy ones. In [2], scores are assigned to small-loss samples using so-called local votes, which take into account only the current batch, and global votes, which are computed from the training history. Co-teaching [3] is based on the small-loss selection strategy, but uses two networks that select clean instances for each other.…”
Section: Related Work
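A rough sketch of the Co-teaching-style selection mentioned above: two networks each pick their small-loss samples, and each network is updated on the samples chosen by its peer. The step structure, fixed keep_rate, and names are assumptions, not the cited implementation.

    # Hedged sketch of one Co-teaching-style update step.
    import torch
    import torch.nn.functional as F

    def co_teaching_step(net_a, net_b, opt_a, opt_b, x, y, keep_rate=0.8):
        logits_a, logits_b = net_a(x), net_b(x)
        loss_a = F.cross_entropy(logits_a, y, reduction="none")
        loss_b = F.cross_entropy(logits_b, y, reduction="none")
        k = max(1, int(keep_rate * y.size(0)))
        idx_a = torch.argsort(loss_a.detach())[:k]   # A's small-loss picks
        idx_b = torch.argsort(loss_b.detach())[:k]   # B's small-loss picks
        # Each network is updated on the samples its peer judged clean.
        opt_a.zero_grad()
        loss_a[idx_b].mean().backward()
        opt_a.step()
        opt_b.zero_grad()
        loss_b[idx_a].mean().backward()
        opt_b.step()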
“…Then, a constant number of samples is selected. We note, however, that other strategies are possible, such as fitting a Gaussian mixture model on the obtained scores, as in [2], or defining adaptive per-class thresholds, as in [5]. We study two variations of the proposed approach by considering two types of initialization: zero-initialization and autoencoder (AE) initialization.…”
Section: Proposed Approach
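One of the alternative strategies mentioned above, fitting a Gaussian mixture model on the per-sample scores instead of selecting a constant number of samples, could be sketched as follows; the two-component choice, posterior threshold, and names are assumptions, not the cited implementation.

    # Hedged sketch: fit a two-component GMM on per-sample scores and keep
    # the samples assigned to the low-mean (presumed clean) component.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_clean_mask(scores: np.ndarray, min_clean_prob: float = 0.5) -> np.ndarray:
        gmm = GaussianMixture(n_components=2, random_state=0)
        gmm.fit(scores.reshape(-1, 1))
        clean_comp = int(np.argmin(gmm.means_.ravel()))   # lower mean = cleaner
        posterior = gmm.predict_proba(scores.reshape(-1, 1))[:, clean_comp]
        return posterior > min_clean_prob                 # boolean clean mask

    # Example: a low-score clean mode and a high-score noisy mode.
    scores = np.concatenate([np.random.normal(0.2, 0.05, 900),
                             np.random.normal(1.0, 0.20, 100)])
    clean_mask = gmm_clean_mask(scores)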