2019
DOI: 10.1007/978-3-030-16148-4_2

Cost Sensitive Learning in the Presence of Symmetric Label Noise

Abstract: In the binary classification framework, we are interested in making cost-sensitive label predictions in the presence of uniform/symmetric label noise. We first observe that 0-1 Bayes classifiers are not (uniform) noise robust in the cost-sensitive setting. To circumvent this impossibility result, we present two schemes; unlike existing methods, our schemes do not require the noise rate. The first one uses the α-weighted γ-uneven margin squared loss function, l_{α,usq}, which can handle cost sensitivity arising due to domain …
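Since only the loss's name survives the abstract's truncation, a minimal NumPy sketch may help fix ideas. The functional form below (α-weighting across classes, a γ-uneven margin on the negative class) is an assumption reconstructed from the abstract's description, not the paper's exact definition, and `alpha_usq_loss` is a hypothetical name.

```python
import numpy as np

def alpha_usq_loss(y, fx, alpha=0.5, gamma=1.0):
    """Sketch of an alpha-weighted, gamma-uneven margin squared loss.

    y     : labels in {+1, -1}
    fx    : real-valued classifier scores f(x)
    alpha : cost weight on the positive class (1 - alpha on the negative)
    gamma : uneven-margin parameter skewing the penalty on negatives

    Assumed form: alpha * (1 - f(x))^2 on positives and
    (1 - alpha) * (1 + gamma * f(x))^2 / gamma on negatives.
    """
    y = np.asarray(y, dtype=float)
    fx = np.asarray(fx, dtype=float)
    pos = alpha * (1.0 - fx) ** 2                          # y = +1 examples
    neg = (1.0 - alpha) * (1.0 + gamma * fx) ** 2 / gamma  # y = -1 examples
    return np.where(y > 0, pos, neg)

# Example: upweight the positive class (alpha = 0.7) with uneven margin gamma = 2
y = np.array([1, -1, 1, -1])
scores = np.array([0.8, -0.6, 0.1, 0.4])
print(alpha_usq_loss(y, scores, alpha=0.7, gamma=2.0))
```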

Cited by 3 publications (6 citation statements) | References 10 publications
“…For imbalanced and symmetric label noise (ρ < 0.5) corrupted data, the majority (minority) class continues to be in the majority (minority) (Lemma 3 [16]).…”
Section: WGANs-Based Schemes for Noisy Imbalanced Data
confidence: 99%
“…In the last decade, the label noise problem has gained a lot of attention from researchers due to its prevalence in various real-life situations. [12,13,14,15,5,16] focus on the effect of label noise on non-deep classification schemes and provide solutions that either use a label-noise-robust loss function or modify the loss function to make it robust. For deep learning schemes, [17] propose two algorithms, called forward and backward loss correction, to learn from label-noise-corrupted data, [18] identify label-noise-robust loss functions to be used by neural networks, and [19] provide consistency results when the noise is instance dependent.…”
Section: Related Work
confidence: 99%
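For context on the forward correction of [17] mentioned in the quote above, here is a minimal NumPy sketch of the idea under symmetric label noise: the model's clean-class probabilities are pushed through the noise transition matrix before taking cross-entropy against the observed noisy labels. The function name and the row convention for T are illustrative assumptions.

```python
import numpy as np

def forward_corrected_ce(probs, noisy_labels, T):
    """Forward loss correction (sketch).

    probs        : (n, k) model probabilities over clean classes
    noisy_labels : (n,) observed, possibly corrupted labels
    T            : (k, k) transition matrix, T[i, j] = P(noisy = j | true = i)
    """
    noisy_probs = probs @ T  # implied distribution over noisy labels
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.log(np.clip(picked, 1e-12, None)).mean()

# Symmetric label noise with flip rate rho = 0.2 on two classes
rho = 0.2
T = np.array([[1 - rho, rho],
              [rho, 1 - rho]])
probs = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
print(forward_corrected_ce(probs, np.array([0, 1]), T))
```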
“…As for the binary classification task, Masnadi-Shirazi and Vasconcelos [26] proposed the Savage loss based on boosting algorithms and theoretically proved its robustness to outliers. Tripathi and Hemachandra [27] demonstrated that the 0-1 loss is tolerant to uniform label noise. These loss functions, however, are neither appropriate for multi-class scenarios nor for training very deep neural networks.…”
Section: Related Work
confidence: 99%
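To make the outlier-robustness claim for the Savage loss concrete, a short sketch: the loss is bounded in the margin, so a grossly misclassified point contributes at most a constant, unlike exponential or squared losses which grow without bound. The commonly cited form φ(v) = 1 / (1 + e^{2v})² with margin v = y·f(x) is assumed here; treat the constant in the exponent as an assumption rather than the paper's exact definition.

```python
import numpy as np

def savage_loss(margin):
    """Savage loss (assumed form): bounded above by 1, so outliers with
    large negative margins cannot dominate the training objective."""
    return 1.0 / (1.0 + np.exp(2.0 * np.asarray(margin))) ** 2

# Losses saturate near 1 even for a badly misclassified outlier (v = -10)
print(savage_loss(np.array([3.0, 0.0, -3.0, -10.0])))
```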