2007
DOI: 10.1007/s00453-007-0037-z
Sample Complexity for Computational Classification Problems

Abstract: In a statistical setting of the classification (pattern recognition) problem the number of examples required to approximate an unknown labelling function is linear in the VC dimension of the target learning class. In this work we consider the question of whether such bounds exist if we restrict our attention to computable classification methods, assuming that the unknown labelling function is also computable. We find that in this case the number of examples required for a computable method to approximate the l…

Cited by 3 publications (3 citation statements)
References 14 publications
“…is disjunctive in such theories. In fact, the works in [28,104] raise some critical questions about the adequacy of the PAC learning framework for designing ML algorithms with limited computation complexity. These questions have led to a substantial amount of work modifying the PAC learning paradigm to provide an explicit tradeoff between sample and computation complexity.…”
Section: 2.2 (mentioning)
confidence: 99%
“…The normalized maximum likelihood distribution considered above does not in general lead to the optimum solution for this problem. The optimum solution is obtained through the result that relates the minimax (12) to the so-called channel capacity. For a set A of measures on a finite set X the channel capacity of A is defined as…”
Section: ⊓⊔ (mentioning)
confidence: 99%
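The snippet above breaks off at the definition of channel capacity. For orientation only, the standard information-theoretic definition (notation assumed here, not taken from the cited paper) is: for a set $A$ of probability measures on a finite set $X$,

$$
C(A) \;=\; \sup_{\pi} \; \sum_{P \in \operatorname{supp}(\pi)} \pi(P)\, D\!\left(P \,\Big\|\, \textstyle\sum_{Q} \pi(Q)\, Q\right),
$$

where the supremum ranges over priors $\pi$ with finite support in $A$ and $D(\cdot\|\cdot)$ is the Kullback–Leibler divergence. The redundancy–capacity theorem is the standard result identifying this quantity with a minimax redundancy of the kind the snippet refers to.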
“…Conceptually this is a much simpler problem; however, in some cases it can be intractable (see e.g. [12]). In general, for each particular class of classifiers a separate algorithm should be constructed to find efficiently a classifier that fits the data.…”
Section: ⊓⊔ (mentioning)
confidence: 99%
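The last snippet notes that, in general, one needs a class-specific algorithm to find a classifier fitting the data efficiently, since naive fitting can be intractable. A minimal sketch of what "fitting the data" means in the generic, brute-force sense (empirical risk minimization over a small finite hypothesis class; all names and the threshold class are illustrative, not from the cited works):

```python
# Brute-force empirical risk minimization (ERM) over a finite
# hypothesis class. For large or structured classes this search is
# exactly what becomes intractable without a specialized algorithm.

def empirical_error(h, sample):
    """Fraction of labelled examples (x, y) that h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

def erm(hypotheses, sample):
    """Return a hypothesis with the smallest empirical error."""
    return min(hypotheses, key=lambda h: empirical_error(h, sample))

# Illustrative class: threshold classifiers on the integers 0..5.
thresholds = [lambda x, t=t: int(x >= t) for t in range(6)]
sample = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 1)]

best = erm(thresholds, sample)
print(empirical_error(best, sample))  # 0.0 -- the threshold t=3 fits the sample
```

The point of the quoted remark is that this exhaustive loop is only feasible for toy classes; for realistic classifier families the search space is too large, which is why a separate efficient fitting algorithm is designed per class.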