2024
DOI: 10.1109/tnnls.2022.3186930
Compressing Features for Learning With Noisy Labels

Abstract: Supervised learning can be viewed as distilling relevant information from input data into feature representations. This process becomes difficult when supervision is noisy, as the distilled information might not be relevant. In fact, recent research [1] shows that networks can easily overfit all labels, including those that are corrupted, and hence can hardly generalize to clean datasets. In this paper, we focus on the problem of learning with noisy labels and introduce compression inductive bias to network arch…
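The abstract is truncated above, but the idea it describes, adding a compression-style inductive bias so the network cannot simply memorize every (possibly corrupted) label, can be sketched as a feature-truncation layer. The snippet below is a minimal illustration only: the nested-dropout-style truncation, the layer name `NestedDropout`, and the geometric parameter `p` are assumptions for exposition, not the paper's exact mechanism.

```python
import torch
import torch.nn as nn

class NestedDropout(nn.Module):
    """Illustrative compression layer: randomly truncate the trailing
    feature dimensions so that the leading dimensions are forced to
    carry most of the label-relevant information."""

    def __init__(self, p: float = 0.1):
        super().__init__()
        self.p = p  # geometric parameter controlling the truncation index

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch_size, num_features); full features are kept at eval time.
        if not self.training:
            return x
        num_features = x.size(1)
        # Sample a truncation index k and zero out all dimensions beyond it.
        k = int(torch.distributions.Geometric(probs=self.p).sample().item()) + 1
        k = min(k, num_features)
        mask = torch.zeros_like(x)
        mask[:, :k] = 1.0
        return x * mask
```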

Cited by 13 publications (6 citation statements)
References: 39 publications
“…In particular, for sample reweighting, Co-teaching [47] is a classical method that cross-updates its two base networks on the small-loss samples selected by its peer. Based on this, Nested Co-teaching [50,51]…”
Section: Related Work (mentioning)
confidence: 94%
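The Co-teaching procedure described in this citation can be sketched as follows. This is a hedged illustration of the small-loss, cross-update idea only; the function name `coteaching_step` and the fixed `forget_rate` argument are assumptions (the original method anneals the forget rate over epochs).

```python
import torch
import torch.nn.functional as F

def coteaching_step(net_a, net_b, opt_a, opt_b, x, y, forget_rate):
    """One illustrative Co-teaching update: each network picks the
    small-loss samples in the batch, and its peer is trained on them."""
    with torch.no_grad():
        loss_a = F.cross_entropy(net_a(x), y, reduction="none")
        loss_b = F.cross_entropy(net_b(x), y, reduction="none")

    num_keep = int((1.0 - forget_rate) * y.size(0))
    idx_a = torch.argsort(loss_a)[:num_keep]  # small-loss picks of net A
    idx_b = torch.argsort(loss_b)[:num_keep]  # small-loss picks of net B

    # Cross-update: A learns from B's selection, B learns from A's selection.
    opt_a.zero_grad()
    F.cross_entropy(net_a(x[idx_b]), y[idx_b]).backward()
    opt_a.step()

    opt_b.zero_grad()
    F.cross_entropy(net_b(x[idx_a]), y[idx_a]).backward()
    opt_b.step()
```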
“…In particular, for sample reweighting, Co-teaching [31] is a classical method that cross-updates its two base networks on the small-loss samples selected by its peer. Based on this, Nested Co-teaching [32] improves performance by including compression regularization during the training.…”
Section: Related Work (mentioning)
confidence: 99%
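Nested Co-teaching, as characterized in this citation, combines the cross-update above with a compression regularizer on the features. A possible composition, reusing the illustrative `NestedDropout` layer and `coteaching_step` function sketched earlier, is shown below; the class name, `feat_dim`, and the placement of the compression layer are assumptions rather than the paper's exact architecture.

```python
import torch.nn as nn

class CompressedClassifier(nn.Module):
    """Hypothetical composition: a backbone, followed by the NestedDropout
    compression sketch above, followed by a linear classifier. Two such
    networks can then be trained with the coteaching_step sketch."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int, p: float = 0.1):
        super().__init__()
        self.backbone = backbone          # e.g. a ResNet trunk emitting feat_dim features
        self.compress = NestedDropout(p)  # compression regularizer (defined earlier)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.fc(self.compress(self.backbone(x)))
```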
“…The dataset is categorized into 14 classes, containing 1,000,000 training images with noise ratio ∼38% and 10,526 test images. We follow the preprocessing in [29,32] for this dataset.…”
Section: Robustness To Label Noise (mentioning)
confidence: 99%
“…(Song et al. 2019, Zhang et al. 2021, Chen et al. 2022, Gao et al. 2022, Xia et al. 2023). We evaluated our method on different backbones, including ResNet-18, ResNet-34, and VGG-19.…”
mentioning
confidence: 99%