1995
DOI: 10.1007/3-540-59286-5_57
The power of decision tables

Abstract: We evaluate the power of decision tables as a hypothesis space for supervised learning algorithms. Decision tables are one of the simplest hypothesis spaces possible, and usually they are easy to understand. Experimental results show that on artificial and real-world domains containing only discrete features, IDTM, an algorithm inducing decision tables, can sometimes outperform state-of-the-art algorithms such as C4.5. Surprisingly, performance is quite good on some datasets with continuous features,…
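A decision table of the kind the abstract describes can be sketched as a lookup from a chosen feature subset to a majority class. The feature-subset search that IDTM performs is omitted here, and all function and variable names are illustrative, not taken from the paper:

```python
from collections import Counter

def fit_decision_table(X, y, features):
    """Induce a simple decision table over a chosen feature subset.

    Each training row is reduced to its values on `features`; the table
    maps that key to the majority class observed for it.
    """
    buckets = {}
    for row, label in zip(X, y):
        key = tuple(row[f] for f in features)
        buckets.setdefault(key, []).append(label)
    table = {k: Counter(v).most_common(1)[0][0] for k, v in buckets.items()}
    majority = Counter(y).most_common(1)[0][0]  # global fallback class
    return table, majority

def predict(table, majority, row, features):
    # Unseen feature-value combinations fall back to the majority class.
    return table.get(tuple(row[f] for f in features), majority)
```

The fallback-to-majority behavior for unmatched rows is one common convention; variants instead fall back to a nearest-neighbor lookup over the table.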

Cited by 551 publications (320 citation statements)
References 29 publications (23 reference statements)
“…We further investigate the influence of the α and β parameters of the RS metric function. The performance of the RS metric is studied through various experiments using the Weka [10] toolkit on the following rule-based prediction models: DecisionTable [11], JRip [12], Nearest Neighbor with generalization (NNge) [13], PART [14], ConjunctiveRule [15] and Ridor [16] on the breakout dataset [1]. The breakout dataset consists of 236 samples of data from different users gathered from the breakout area.…”
Section: Experimentation and Results
confidence: 99%
“…The supervised machine learning methods used in the experiments cover several different machine learning paradigms, and include Additive Regression (Friedman, 2002), Decision Table (Kohavi, 1995), Nearest Neighbor with a weighted condition (Aha et al, 1991), K* (Cleary et al, 1995), Locally Weighted Learning with Naive Bayes and Linear regression classifiers (Frank et al, 2002;Atkeson et al, 1997), Random Committee (Seung et al, 1992), and Random Trees (Aldous, 1993).…”
Section: Machine Learning Algorithms
confidence: 99%
“…Decision Table (Kohavi, 1995) is a rule-based learning algorithm that tests a set of data using conditional logic and predicts the values of new samples from the results of the training set. Nearest Neighbor with a weighted condition (Aha et al, 1991) is an instance-based method that predicts the outcome of a given situation by using a distance equation to find the nearest training examples.…”
Section: Machine Learning Algorithms
confidence: 99%
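The nearest-neighbor prediction described in the excerpt above can be sketched minimally as plain 1-NN with Euclidean distance; the attribute weighting of Aha et al. is omitted, and the names are illustrative:

```python
import math

def one_nearest_neighbor(train, labels, query):
    """Predict the label of the training point closest to `query`
    (plain Euclidean distance; no attribute weighting)."""
    dists = [math.dist(point, query) for point in train]
    return labels[dists.index(min(dists))]
```

Ties on distance resolve to the earliest training point, which is one common convention.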
“…DecisionTable: Decision Table is an accurate method for numeric prediction derived from decision trees; it is an ordered set of If-Then rules that have the potential to be more compact, and therefore more understandable, than decision trees [39].…”
Section: Proc. of the Third Intl. Conf. on Advances in Bio-informatics
confidence: 99%
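The ordered If-Then representation described in the last excerpt can be sketched as a first-match-wins rule list; the example rules and attribute names here are hypothetical:

```python
def predict_rule_list(rules, default, instance):
    """Scan an ordered list of (condition, label) rules; the first
    rule whose condition fully matches the instance decides the class."""
    for condition, label in rules:
        if all(instance.get(attr) == value for attr, value in condition.items()):
            return label
    return default  # no rule fired: fall back to the default class

# Hypothetical rule list: earlier rules take precedence over later ones.
weather_rules = [
    ({"outlook": "sunny"}, "no"),
    ({"wind": "strong"}, "no"),
]
```

Because the list is ordered, rule order encodes precedence, which is what makes the representation compact relative to an equivalent tree.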