2018
DOI: 10.1007/s10994-018-5715-3
Learning from binary labels with instance-dependent noise

Abstract: Suppose we have a sample of instances paired with binary labels corrupted by arbitrary instance- and label-dependent noise. With sufficiently many such samples, can we optimally classify and rank instances with respect to the noise-free distribution? We provide a theoretical analysis of this question, with three main contributions. First, we prove that for instance-dependent noise, any algorithm that is consistent for classification on the noisy distribution is also consistent on the clean distribution. Second,…
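The abstract's claim can be illustrated for the special case of symmetric but instance-dependent noise, where each label flips with probability rho(x) < 1/2. A short algebraic identity, eta_noisy(x) - 1/2 = (1 - 2*rho(x)) * (eta(x) - 1/2), shows the Bayes-optimal classifier is unchanged by the corruption. The sketch below is illustrative only: the posterior eta() and flip rate rho() are hypothetical choices, not taken from the paper, and this is not the paper's proof technique.

```python
# Minimal numeric sketch (assumed setup, not the paper's construction):
# instances x lie in [0, 1]; eta(x) is the clean class-posterior
# P(Y = +1 | x); each label is flipped with instance-dependent
# probability rho(x), kept strictly below 1/2.

def eta(x):
    """Clean class-posterior P(Y = +1 | x); illustrative choice."""
    return x

def rho(x):
    """Instance-dependent flip probability, strictly below 1/2."""
    return 0.4 * abs(x - 0.5)  # noisier away from the decision boundary

def eta_noisy(x):
    """Corrupted posterior when each label flips with probability rho(x)."""
    return (1 - rho(x)) * eta(x) + rho(x) * (1 - eta(x))

# Thresholding the posterior at 1/2 gives the Bayes-optimal classifier;
# the identity eta_noisy - 1/2 = (1 - 2*rho) * (eta - 1/2) means the
# clean and noisy classifiers agree everywhere when rho(x) < 1/2.
for i in range(101):
    x = i / 100
    assert (eta(x) >= 0.5) == (eta_noisy(x) >= 0.5)
```

The check confirms sign agreement on a grid; the general instance- and label-dependent case analyzed in the paper requires more care, since asymmetric flip rates shift the noisy threshold away from 1/2.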

Cited by 67 publications (77 citation statements)
References 37 publications
“…Improvements on this direction may also widen the applicability to massively multi-class scenarios. It remains an open question whether instance-dependent noise may be included into our approach [42,25]. Finally, we anticipate the use of our approach as a tool for pre-training models with noisy data from the Web, in the spirit of [17].…”
Section: Discussion
confidence: 99%
“…class-dependent), label noise can produce solutions that are akin to random guessing [22]. On the other hand, the Bayes-optimal classifier remains unchanged under symmetric [28,26] and even instance-dependent label noise [25], implying that high-capacity models are robust to essentially any level of such noise, given sufficiently many samples.…”
Section: Related Work
confidence: 99%
“…Segmentation with inaccurate or imprecise annotations refers to the scenario where the ground-truth labels are corrupted with (random, class-conditional, or instance-conditional [280], [281]) noise, and thus also relates to noisy-label learning [282], [283]. Imprecise boundaries and mislabeling are also inaccurate annotations.…”
Section: Inaccurately-supervised Segmentation
confidence: 99%
“…They can either be learned in advance [12] or jointly with the rest of the model with an extra layer [13][14][15][16]. Prior work has also used a noise model conditioned on the input features [17,18]. However, these models cannot be directly applied to ASR as they do not handle sequential inputs and arbitrary-length outputs.…”
Section: Introduction
confidence: 99%