1994
DOI: 10.1613/jair.41

Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction

Abstract: We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel Maspar computer…
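
Below is a minimal, illustrative sketch of the kind of experiment the abstract describes: enumerate every decision tree over a handful of binary attributes, keep only the trees consistent with the training data, and tabulate test accuracy by tree size. The toy dataset, attribute names, and helper functions are invented for illustration; this is not the paper's massively parallel Maspar implementation.

```python
from collections import defaultdict
from itertools import product

# Toy data: (binary attribute values, class label). Purely illustrative.
ATTRS = ["a", "b", "c"]
train = [({"a": 0, "b": 0, "c": 1}, 0),
         ({"a": 0, "b": 1, "c": 0}, 1),
         ({"a": 1, "b": 0, "c": 0}, 1),
         ({"a": 1, "b": 1, "c": 1}, 0)]
test = [({"a": 0, "b": 0, "c": 0}, 0),
        ({"a": 1, "b": 1, "c": 0}, 1),
        ({"a": 0, "b": 1, "c": 1}, 1),
        ({"a": 1, "b": 0, "c": 1}, 1)]

# A tree is either a class label (0 or 1) or ("test", attr, left, right),
# where `left` is taken when attr == 0 and `right` when attr == 1.

def all_trees(attrs):
    """Yield every tree that tests each attribute at most once per path."""
    yield 0
    yield 1
    for i, attr in enumerate(attrs):
        rest = attrs[:i] + attrs[i + 1:]
        for left, right in product(list(all_trees(rest)), repeat=2):
            yield ("test", attr, left, right)

def classify(tree, example):
    while isinstance(tree, tuple):
        _, attr, left, right = tree
        tree = left if example[attr] == 0 else right
    return tree

def size(tree):
    """Total node count: leaves plus internal test nodes."""
    if not isinstance(tree, tuple):
        return 1
    return 1 + size(tree[2]) + size(tree[3])

def accuracy(tree, data):
    return sum(classify(tree, x) == y for x, y in data) / len(data)

# Keep the trees that fit the training data perfectly ("consistent" trees),
# then look at how mean test accuracy varies with tree size.
consistent = [t for t in all_trees(ATTRS) if accuracy(t, train) == 1.0]
by_size = defaultdict(list)
for t in consistent:
    by_size[size(t)].append(accuracy(t, test))
for s in sorted(by_size):
    accs = by_size[s]
    print(f"size={s:2d}  trees={len(accs):4d}  mean test accuracy={sum(accs)/len(accs):.2f}")
```

Even with only three binary attributes the enumeration already covers thousands of trees, which hints at why the original study needed massively parallel hardware for larger attribute sets.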

Cited by 70 publications (32 citation statements); references 13 publications (13 reference statements).

Citation statements, ordered by relevance:

“…In (Murphy & Pazzani, 1997) an exhaustive enumeration of decision trees on small datasets was performed in order to determine the validity of the principles of Occam's razor and oversearch for decision tree learning. In these experiments it was found that slightly larger trees can be found using complete search methods and that these trees can sometimes perform better than smaller trees found using heuristic methods.…”
Section: Exact Decision Tree Induction
Mentioning confidence: 99%
“…If AIC can be defined for decision trees, then it can presumably only take the form of penalising the log-likelihood with twice the number of nodes. In the case of binary decision trees, this is equivalent to a penalty of the number of nodes, which is the penalty function adopted in the binary tree study by Murphy and Pazzani ([1994]). Even if we are to permit this interpretation of AIC, MML has empirically been shown to work decidedly better on this problem in (Needham and Dowe [2001]).…”
Section: 4.3
Mentioning confidence: 99%
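
One way to read the node-counting arithmetic behind this remark, under the assumption (not stated in the quoted text) that the parameter count k in AIC is taken to be the number of leaves of the tree:

```latex
% Standard AIC with k free parameters and maximised likelihood \hat{L}.
\[
  \mathrm{AIC} = -2\ln\hat{L} + 2k .
\]
% In a full binary tree with \ell leaves the total node count N satisfies
% N = 2\ell - 1, so a penalty of twice the number of leaves is, up to an
% additive constant, a penalty of the total number of nodes:
\[
  N = 2\ell - 1 \quad\Longrightarrow\quad 2\ell = N + 1 \approx N .
\]
```
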
“…We do not claim that stability is always desirable. For example, in some situations, we may want to discover all concepts that are consistent with the training data (Murphy & Pazzani, 1994). Thus we may sometimes define a measure of bias correctness that is based on accuracy alone, but there would not be much interest in a measure based on stability alone.…”
Section: Improving Stability
Mentioning confidence: 99%