1996
DOI: 10.1162/neco.1996.8.7.1341

The Lack of A Priori Distinctions Between Learning Algorithms

Abstract: This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower expected OTS err…
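The abstract's central claim can be checked by brute force on a toy domain. The sketch below is illustrative and not from the paper: the domain size, the two learners, and all names are assumptions, and it averages zero-one OTS error uniformly over every possible binary target, which is the simplest version of the theorem's setting.

```python
import itertools

# Tiny input space with binary targets. The training set covers two points;
# off-training-set (OTS) error is measured only on the remaining two.
X = [0, 1, 2, 3]
train_x = [0, 1]
test_x = [2, 3]

def algo_a(train):
    """Majority-vote learner: predict the most common training label (tie -> 0)."""
    ones = sum(y for _, y in train)
    return lambda x: 1 if 2 * ones > len(train) else 0

def algo_b(train):
    """Degenerate learner: always predict 1, ignoring the data."""
    return lambda x: 1

def avg_ots_error(algo):
    """Average OTS error over all 2^4 = 16 possible targets f: X -> {0, 1}."""
    targets = list(itertools.product([0, 1], repeat=len(X)))
    total = 0.0
    for labels in targets:
        f = dict(zip(X, labels))
        h = algo([(x, f[x]) for x in train_x])
        total += sum(h(x) != f[x] for x in test_x) / len(test_x)
    return total / len(targets)

print(avg_ots_error(algo_a))  # 0.5
print(avg_ots_error(algo_b))  # 0.5 -- identical average OTS error
```

Both learners average 0.5 OTS error because, off the training set, every labeling of the test points is equally likely under the uniform prior over targets, so no algorithm can do better than any other on average.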

Cited by 1,315 publications (697 citation statements, spanning 1997–2020); references 5 publications.
“…As the "No Free Lunch" theorem states, there is no perfect algorithm [Wolpert 1996]. As with all modeling situations, it is important to find the right tool for the job.…”
Section: Results
Mentioning; confidence: 99%
“…By general-purpose or special-purpose we want to distinguish methods that apply to a large class versus a tiny class of tasks. Keeping in mind that there exists no completely universal statistical learning algorithm (Wolpert, 1996), it suffices that such broadly applicable generalization principles be relevant to the type of learning tasks that we care about, such as those solved by humans and animals.…”
Section: What Is Needed
Mentioning; confidence: 99%
“…In addition to reporting and discussing the tabularized results, a Friedman test is conducted with a Nemenyi post hoc test [70] to rank the methods and describe their critical differences under all metric and problem type conditions. This is done to give some measure of generalizability [71,72] (commonly done when evaluating multiple classifiers over multiple problem instances), although problem instances and experimental variations are not exhaustive.…”
Section: Segment Quality Comparison and Methods Ranking
Mentioning; confidence: 99%
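For readers unfamiliar with the ranking procedure named in the last excerpt, here is a minimal sketch of a Friedman omnibus test followed by a Nemenyi post hoc comparison. The data shape, the random scores, and the use of scipy and the third-party scikit-posthocs package are assumptions for illustration, not details taken from the cited work.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical scores: one row per problem instance, one column per method.
rng = np.random.default_rng(0)
scores = rng.random((12, 3))  # 12 problems x 3 methods (made-up data)

# Friedman test: do the methods' rank distributions differ across problems?
stat, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

# If the omnibus test rejects, a Nemenyi post hoc test locates which pairs of
# methods differ, e.g. with scikit-posthocs (assumed installed):
#   import scikit_posthocs as sp
#   sp.posthoc_nemenyi_friedman(scores)  # pairwise p-value matrix
```

The Friedman test is used here because the same methods are evaluated on the same problem instances (a blocked, non-parametric design); the Nemenyi step is only meaningful after the omnibus test rejects the null of equal ranks.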