2012
DOI: 10.1007/s10472-012-9325-7

PAC-learning in the presence of one-sided classification noise

Abstract: We derive an upper and a lower bound on the sample size needed for PAC-learning a concept class in the presence of one-sided classification noise. The upper bound is achieved by the strategy "Minimum One-sided Disagreement". It matches the lower bound (which holds for any learning strategy) up to a logarithmic factor. Although "Minimum One-sided Disagreement" often leads to NP-hard combinatorial problems, we show that it can be implemented quite efficiently for some simple concept classes like, for example, uni…

Cited by 2 publications (3 citation statements)
References 25 publications
“…Every problem known to be efficiently PAC-learnable is also known to be efficiently learnable with one-sided random classification noise, although no formal relationship is proven so far (see Simon [15] for further discussion of the one-sided random classification noise model). Thus [2] implies the efficient independent MIL-PAC-learnability of all known efficiently PAC-learnable classes.…”
Section: Introduction
Mentioning confidence: 99%
“…The results above show that the minimum one-sided disagreement approach (Simon 2012) can be used to learn instance concepts from MI data. Below, we show that the asymmetry of this approach is not required when learning under other performance metrics.…”
Section: Learning High-AUC Instance Concepts
Mentioning confidence: 82%
“…δ examples using a "minimum one-sided disagreement" strategy (Simon 2012). This strategy entails choosing a classifier that minimizes the number of disagreements on positivelylabeled examples while perfectly classifying all negatively-labeled examples.…”
Section: Learning Accurate Instance Concepts
Mentioning confidence: 99%
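
Concretely, the "minimum one-sided disagreement" strategy quoted above is a constrained form of empirical-risk minimization: negative labels are treated as reliable (the hypothesis must agree with all of them), while positive labels may be noisy, so only disagreements on them are counted and minimized. A minimal sketch, assuming a finite hypothesis class that can be enumerated explicitly; the function and variable names below are illustrative, not from the paper:

```python
def min_one_sided_disagreement(hypotheses, samples):
    """samples: list of (x, label) pairs with label in {0, 1}.

    Returns the hypothesis that classifies every negatively-labeled
    example as negative while minimizing the number of disagreements
    on the positively-labeled examples, per the strategy quoted above.
    """
    best, best_err = None, float("inf")
    for h in hypotheses:
        # Negative labels are treated as reliable: reject any hypothesis
        # that predicts positive on a negatively-labeled example.
        if any(h(x) for x, y in samples if y == 0):
            continue
        # Count disagreements on the positively-labeled examples only.
        err = sum(1 for x, y in samples if y == 1 and not h(x))
        if err < best_err:
            best, best_err = h, err
    return best

# Toy usage with one-dimensional threshold concepts h_t(x) = [x >= t].
# The positive label on (0.3, 1) plays the role of a possibly noisy label.
samples = [(0.2, 0), (0.4, 1), (0.6, 1), (0.8, 1), (0.3, 1)]
hypotheses = [lambda x, t=t: x >= t for t in (0.1, 0.25, 0.35, 0.5)]
h = min_one_sided_disagreement(hypotheses, samples)
```

For richer concept classes this search is exactly where the NP-hard combinatorial problems noted in the abstract arise; per the abstract, the paper gives efficient implementations only for some simple concept classes.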