2001
DOI: 10.1007/3-540-44581-1_22

Pattern Recognition and Density Estimation under the General i.i.d. Assumption

Cited by 28 publications (18 citation statements). References 8 publications.
“…PAC theory only assumes that the instances are generated independently by some completely unknown distribution, but for the resulting bounds to be interesting in practice, the data set must be quite clean. Unfortunately, this is rarely the case for real-world data, which leads to very loose bounds; see, e.g., Nouretdinov et al. (2001), where the crudeness of PAC theory is demonstrated. In addition, the PAC bounds are for the overall error and not for individual predictions.…”
Section: Related Work
confidence: 99%
“…If this is not the case, as it is not for the majority of data sets, the bounds obtained from these methods are very loose and as such are not very useful in practice. A demonstration of the crudeness of PAC bounds can be found in Nouretdinov et al. (2001a), where Littlestone and Warmuth's bound (Theorems 4.25 and 6.8 in Cristianini & Shawe-Taylor, 2000) is applied to the USPS data set. In addition, PAC theory has two other drawbacks: (a) the majority of relevant results either involve large explicit constants or do not specify the relevant constants at all; (b) the bounds obtained by PAC theory are for the overall error and not for individual test examples.…”
Section: Introduction
confidence: 99%
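To make the quoted crudeness concrete, below is a minimal, self-contained Python sketch of a sample-compression bound in the Littlestone-Warmuth style. The exact form and constants vary between textbook presentations, and the support-vector count used here is a hypothetical stand-in, not the figure from Nouretdinov et al. (2001a); the point is only that on a data set of USPS's size the guarantee comes out near or above 1, i.e. uninformative.

import math

def compression_bound(m, d, delta=0.05):
    # One common form of a sample-compression bound: with probability
    # at least 1 - delta, a consistent classifier reconstructible from
    # d of its m training examples has error at most
    #   (d * ln(e*m/d) + ln(m/delta)) / (m - d).
    # Constants differ between presentations; this is a sketch, not the
    # precise theorem applied in Nouretdinov et al. (2001a).
    return (d * math.log(math.e * m / d) + math.log(m / delta)) / (m - d)

# USPS has 7,291 training examples; the support-vector count below is a
# hypothetical value for a multi-class SVM trained on it.
m, d = 7291, 2500
print(f"error bound <= {compression_bound(m, d):.2f}")  # ~1.08, vacuous

A bound at or above 1 carries no information, which is consistent with the 0.74 multi-class figure quoted in the next statement: even with favorable constants, the guarantee is far too weak to certify individual predictions.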
“…Nouretdinov also illustrated in [51] that the error bound becomes 0.74 when the Littlestone-Warmuth theorem is extended to multi-class classifiers for this dataset. In summary, the limitations of the PAC learning theory in the context of obtaining reliable confidence measure values are:…”
Section: Limitations
confidence: 78%
“…However, the error bound values generated by such approaches are often not very practical, as demonstrated by Proedrou in [41] and by Nouretdinov in [51]. For example, the Littlestone-Warmuth theorem is known to be one of the most sound results in PAC theory.…”
Section: Limitations
confidence: 99%