2016
DOI: 10.1109/TPAMI.2015.2456899

Classification with Noisy Labels by Importance Reweighting

Abstract: In this paper, we study a classification problem in which sample labels are randomly corrupted. In this scenario, there is an unobservable sample with noise-free labels. However, before being observed, the true labels are independently flipped with a probability ρ ∈ [0, 0.5), and the random label noise can be class-conditional. Here, we address two fundamental problems raised by this scenario. The first is how to best use the abundant surrogate loss functions designed for the traditional classification…
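To make the noise model concrete, here is a minimal Python sketch of the class-conditional flipping the abstract describes, assuming binary labels in {-1, +1}; the function name and the rates rho_pos, rho_neg are illustrative, not taken from the paper.

```python
import numpy as np

def flip_labels(y, rho_pos=0.2, rho_neg=0.1, seed=0):
    """Independently flip each label: positives with probability rho_pos,
    negatives with probability rho_neg (class-conditional noise; both rates
    must lie in [0, 0.5), matching the abstract's assumption on rho)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    flip_prob = np.where(y == 1, rho_pos, rho_neg)  # per-sample flip rate
    flips = rng.random(y.shape) < flip_prob
    return np.where(flips, -y, y)

# Example: corrupt a small balanced sample.
y_clean = np.array([1, 1, 1, -1, -1, -1])
y_noisy = flip_labels(y_clean)
```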


Cited by 698 publications (414 citation statements). References 35 publications.
“…Using a probabilistic model for embedded label noise, algorithms such as robust logistic regression [116], [117] have been shown to be more robust to such noise than the originals. Among these algorithms, surrogate loss functions are widely studied [118], [119]. The core idea of these algorithms is to correct the bias in the risk function using importance reweighting…”
Section: Noisy Label Learning
Confidence: 99%
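The reweighting idea this statement refers to can be written as a change of measure: the risk under the clean distribution becomes a weighted expectation under the noisy distribution. A sketch consistent with the abstract (the symbols D, D_ρ, and w are ours, not quoted from [118]):

```latex
% Importance reweighting under label noise: the clean-data risk is an
% importance-weighted expectation over the noisy distribution D_rho.
\[
R_{D,\ell}(h)
  \;=\; \mathbb{E}_{(X,\hat{Y})\sim D_{\rho}}
  \Big[\, w(X,\hat{Y})\,\ell\big(h(X),\hat{Y}\big) \Big],
\qquad
w(X,\hat{Y}) \;=\; \frac{P_{D}(X,\hat{Y})}{P_{D_{\rho}}(X,\hat{Y})}.
\]
```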
“…$\hat{w}^{*}\,\ell\big(h(x),\hat{y}\big)$ (18). The optimality and convergence of reweighting have been demonstrated in [118], [119]. The critical difficulty in this approach lies in estimating the hidden parameters in the probabilistic noise model or the revision coefficients in the objective function.…”
Section: Noisy Label Learning
Confidence: 99%
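A minimal sketch of how such weights could be computed once the flip rates are known or separately estimated, assuming binary labels in {-1, +1}; the function name, the clipping, and the epsilon guard are illustrative rather than the cited papers' procedure.

```python
import numpy as np

def importance_weights(p_noisy_pos, y_noisy, rho_pos, rho_neg):
    """Compute per-sample importance weights under class-conditional noise.

    p_noisy_pos : P(noisy label = +1 | x), e.g. from a probabilistic
                  classifier fit on the noisy data.
    y_noisy     : observed labels in {-1, +1}.
    rho_pos     : flip probability for true positives; rho_neg analogous.

    Inverts  P_rho(+1|x) = rho_neg + (1 - rho_pos - rho_neg) * P(+1|x)
    to recover clean posteriors, then forms w = P_clean(y|x) / P_rho(y|x).
    """
    denom = 1.0 - rho_pos - rho_neg
    p_clean_pos = np.clip((p_noisy_pos - rho_neg) / denom, 0.0, 1.0)
    p_rho_obs = np.where(y_noisy == 1, p_noisy_pos, 1.0 - p_noisy_pos)
    p_clean_obs = np.where(y_noisy == 1, p_clean_pos, 1.0 - p_clean_pos)
    # These weights multiply the per-sample losses in an objective of the
    # form of Eq. (18) above.
    return p_clean_obs / np.maximum(p_rho_obs, 1e-12)
```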
“…A sample is then drawn from the distribution p(dθ M |X, M), which is approximately distributed according to the posterior p(θ M |X, M). Note that importance sampling has also been applied to classification problems [11].…”
Section: The Importance Density and The Sampling Importance Resampling
Confidence: 99%
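For context, a generic sketch of the sampling importance resampling scheme this statement mentions: draw from a proposal, weight by target/proposal, and resample in proportion to the normalized weights. The names and toy densities are placeholders, not the citing paper's code.

```python
import numpy as np

def sir(log_target, log_proposal, proposal_draws, n_keep, seed=0):
    """Sampling importance resampling: the resampled points are
    approximately distributed according to the target (posterior)."""
    rng = np.random.default_rng(seed)
    log_w = log_target(proposal_draws) - log_proposal(proposal_draws)
    w = np.exp(log_w - log_w.max())          # stabilize before normalizing
    w /= w.sum()
    idx = rng.choice(len(proposal_draws), size=n_keep, replace=True, p=w)
    return proposal_draws[idx]

# Toy usage: approximate a N(0, 1) target with a wide N(0, 3^2) proposal.
rng = np.random.default_rng(1)
draws = rng.normal(0.0, 3.0, size=5000)
log_t = lambda x: -0.5 * x**2                # log densities up to constants
log_q = lambda x: -0.5 * (x / 3.0) ** 2
approx_posterior = sir(log_t, log_q, draws, n_keep=1000)
```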
“…However, face recognition still faces many challenges, such as varying lighting, facial expressions, poses, and environments [10,13,14,33,35,42,54,57,65,68]. To overcome these challenges, many representation-based classification methods (RBCMs) [15,29,31,37,52,53,63,64,68] have been proposed, such as SRC [52], collaborative representation classification (CRC) [63], the two-phase test sample representation (TPTSR) [53], linear regression classification (LRC) [37], the feature space representation method [61], and an improvement to the nearest neighbor classification (INNC) [55]. SRC tries to represent the test sample by an optimal linear combination of the training samples.…”
Section: Introduction
Confidence: 99%
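The last sentence of this statement summarizes the representation-based idea: code the test sample as a linear combination of all training samples, then classify by per-class reconstruction residual. A minimal sketch of the l2-regularized (CRC-style [63]) variant, chosen because it has a closed form unlike SRC's l1 solver; the function name and the regularization weight are illustrative.

```python
import numpy as np

def crc_classify(X_train, y_train, x_test, lam=0.01):
    """Code the test sample over all training samples with l2-regularized
    least squares, then pick the class with the smallest residual."""
    A = X_train.T                                  # d x n, columns = samples
    G = A.T @ A + lam * np.eye(A.shape[1])         # regularized Gram matrix
    coef = np.linalg.solve(G, A.T @ x_test)        # representation coefficients
    classes = np.unique(y_train)
    resid = [np.linalg.norm(x_test - A[:, y_train == c] @ coef[y_train == c])
             for c in classes]                     # per-class reconstruction error
    return classes[int(np.argmin(resid))]
```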