2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv56688.2023.00392

Adaptive Sample Selection for Robust Learning under Label Noise

Abstract: Deep Neural Networks (DNNs) have been shown to be susceptible to memorization or overfitting in the presence of noisily-labelled data. For the problem of robust learning under such noisy data, several algorithms have been proposed. A prominent class of algorithms relies on sample selection strategies wherein, essentially, a fraction of samples with loss values below a certain threshold is selected for training. These algorithms are sensitive to such thresholds, and it is difficult to fix or learn these threshold…
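To make the abstract's setup concrete, below is a minimal PyTorch sketch of fixed-threshold small-loss selection. This is a reading of the general strategy the abstract describes, not the paper's own algorithm; the function name and the `loss_threshold` value are illustrative, and the threshold is exactly the sensitive hyperparameter the abstract says is hard to fix or learn.

```python
import torch
import torch.nn.functional as F

def select_small_loss_samples(logits, labels, loss_threshold=1.0):
    """Fixed-threshold small-loss selection (illustrative sketch):
    keep only samples whose per-sample cross-entropy falls below
    `loss_threshold`, treating them as presumed-clean."""
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    mask = per_sample_loss < loss_threshold   # presumed-clean samples
    if mask.any():
        return per_sample_loss[mask].mean(), mask
    return per_sample_loss.mean(), mask       # fallback: nothing selected

# Usage example with random tensors:
logits = torch.randn(32, 10)
labels = torch.randint(0, 10, (32,))
loss, mask = select_small_loss_samples(logits, labels, loss_threshold=1.0)
```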

Cited by 24 publications (4 citation statements) · References 33 publications
“…(7) CoDis [19]: trains two networks simultaneously and applies the covariance regularization method to maintain the divergence between the two networks. (8) Bare [48]: proposes an adaptive sample selection strategy to provide robustness against label noise.…”
Section: Comparison With SOTA Methods
Mentioning confidence: 99%
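For readers unfamiliar with the CoDis idea quoted above, the following is a hedged sketch of what a covariance regularizer between two co-trained networks could look like. The exact CoDis formulation is not given in the statement, so the penalty below is only one plausible form: penalizing the magnitude of the per-class covariance between the two networks' predictions keeps them decorrelated, i.e. maintains their divergence.

```python
import torch

def covariance_penalty(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """Plausible covariance-style regularizer (not CoDis's exact rule).
    p1, p2: softmax outputs of the two networks, shape (batch, classes),
    with batch size > 1. Added to each network's training loss so that
    minimizing it discourages the networks from collapsing together."""
    p1c = p1 - p1.mean(dim=0, keepdim=True)   # center over the batch
    p2c = p2 - p2.mean(dim=0, keepdim=True)
    cov = (p1c * p2c).sum(dim=0) / (p1.shape[0] - 1)  # per-class covariance
    return cov.pow(2).sum()
```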
“…Previous literature about sample selection attempts to detect noisy labels by exploiting the natural resistance of neural networks to noise. BARE (Patel and Sastry 2023) proposes an adaptive sample selection strategy that relies only on batch statistics to provide robustness against label noise. Another line of research focuses on label correction, which typically attempts to rectify sample labels using the model predictions.…”
Section: Related Work
Mentioning confidence: 99%
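The statement above highlights that BARE derives its selection threshold purely from batch statistics rather than fixing it in advance. A minimal sketch of that idea, using the batch mean of per-sample losses as the adaptive threshold (an illustrative choice, not necessarily the paper's exact criterion):

```python
import torch
import torch.nn.functional as F

def batch_adaptive_selection(logits, labels):
    """Threshold derived from the current batch's own statistics
    (here, the mean per-sample loss), so no global threshold or
    noise-rate estimate is required. Illustrative stand-in for
    BARE's batch-statistics criterion, not the paper's exact rule."""
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    threshold = per_sample_loss.mean()        # recomputed every batch
    mask = per_sample_loss <= threshold       # always selects >= 1 sample
    return per_sample_loss[mask].mean(), mask
```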
“…• Modifying the loss function to automatically ignore, or to weaken the emphasis on, mislabeled samples during model training [287].…”
Section: B. Improving the Generalization Ability of Models, 1) Tackling…
Mentioning confidence: 99%
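One common realization of such loss modification (an example chosen here for illustration, not necessarily what reference [287] describes) is the soft-bootstrapped cross-entropy of Reed et al. (2015), which blends the observed label with the model's own prediction so that confidently contradicted labels lose emphasis:

```python
import torch
import torch.nn.functional as F

def soft_bootstrap_loss(logits, labels, beta=0.95):
    """Soft bootstrapping (Reed et al., 2015): the target is a convex
    mix of the observed (possibly noisy) one-hot label and the model's
    current prediction. beta=1 recovers plain cross-entropy; lower beta
    shifts emphasis away from labels the model strongly disagrees with."""
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(labels, num_classes=logits.shape[1]).float()
    target = beta * one_hot + (1.0 - beta) * probs.detach()
    log_probs = F.log_softmax(logits, dim=1)
    return -(target * log_probs).sum(dim=1).mean()
```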
“…• Sample bootstrapping, which provides a selection method for clean samples to update the model by treating the small-loss training samples as clean [287]:…”
Section: B. Improving the Generalization Ability of Models, 1) Tackling…
Mentioning confidence: 99%
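The last statement describes the classic small-loss bootstrapping rule. A minimal sketch, assuming the common fraction-based variant in which a fixed share of the lowest-loss samples in each batch is treated as clean; the `keep_fraction` value is illustrative and in practice is often tied to an estimated noise rate:

```python
import torch
import torch.nn.functional as F

def small_loss_bootstrap(logits, labels, keep_fraction=0.7):
    """Small-loss sample bootstrapping: rank the batch by per-sample
    loss and update the model only on the `keep_fraction` smallest,
    treating them as the presumed-clean samples."""
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    num_keep = max(1, int(keep_fraction * labels.numel()))
    _, idx = torch.topk(per_sample_loss, num_keep, largest=False)
    return per_sample_loss[idx].mean(), idx
```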