2007
DOI: 10.1142/9789812771728
Data Mining with Decision Trees - Theory and Applications

Abstract: Preface: … science, statistics and management. In addition, this book is highly useful to researchers in the social sciences, psychology, medicine, genetics, business intelligence, and other fields characterized by complex data-processing problems of underlying models. Since the material in this book formed the basis of undergraduate and graduate courses at Tel-Aviv University and Ben-Gurion University, it can also serve as a reference source for graduate/advanced undergraduate level courses in knowledge…

Cited by 645 publications (675 citation statements). References: 0 publications.
“…We used Quick, Unbiased, Efficient, Statistical trees (QUEST) [40] to determine the associations between predictor and response variables for both subpopulations. QUEST is one of several types of statistical trees; hierarchical analyses incorporating multiple predictor variables (or 'recursive partitioning') [41,42].…”
Section: Parasite Prevalence and Risk Factors
Citation type: mentioning (confidence: 99%)
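The recursive-partitioning idea behind QUEST can be illustrated with a small sketch. QUEST itself is not available in scikit-learn, so the CART implementation below stands in for it; the data, feature names, and parameters are illustrative assumptions, not taken from the cited study.

```python
# Recursive-partitioning sketch using a CART tree (scikit-learn) in place of QUEST.
# Synthetic data: three hypothetical predictors and a binary response.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # hypothetical predictor variables
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical binary response

# Each internal node splits the data on one predictor; each resulting
# subset is split again until a stopping rule (here, max_depth) applies.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["x0", "x1", "x2"]))
```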
“…in the NPA relative to PA dog subpopulations remains supported. Moreover, upon re-analysing some of the statistics using only the senior operator's data, including both the χ² exact test [37] used to determine off-leash frequency as a potential confounder, as well as the QUEST [40] with Giardia spp. presence or absence as the dependent variable (data not shown), the results remained unchanged.…”
Section: ·720 <0·001
Citation type: mentioning (confidence: 99%)
“…Introducing new learning algorithms is more or less a well-defined process, since there exists an algebraic framework for presenting decision algorithms and describing various splitting criteria [66]. The data structures most frequently used for keeping decision information (trees and tables) are also well-known, along with efficient implementations once we have specified their operations.…”
Section: Motivation
Citation type: mentioning (confidence: 99%)
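As a rough illustration of the tree structure mentioned above, here is a minimal sketch of a node type for keeping decision information, with a single predict operation, assuming binary threshold tests on numeric attributes; all names and values are hypothetical.

```python
# Minimal decision-tree node structure; names and values are illustrative only.
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Node:
    attribute: Optional[int] = None   # index of the attribute tested at this node
    threshold: float = 0.0            # test is x[attribute] <= threshold
    left: Optional["Node"] = None     # subtree taken when the test is true
    right: Optional["Node"] = None    # subtree taken when the test is false
    label: Optional[int] = None       # class label if this node is a leaf

    def predict(self, x: Sequence[float]) -> int:
        """Follow one test per level from the root until a leaf is reached."""
        if self.label is not None:
            return self.label
        branch = self.left if x[self.attribute] <= self.threshold else self.right
        return branch.predict(x)

# Example: a depth-2 tree deciding on two attributes.
leaf0, leaf1 = Node(label=0), Node(label=1)
root = Node(attribute=0, threshold=0.5,
            left=Node(attribute=1, threshold=2.0, left=leaf0, right=leaf1),
            right=leaf1)
print(root.predict([0.3, 2.5]))  # -> 1
```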
“…The number of tests necessary to reach a leaf is equal to the depth of the Decision Tree. This depth varies around the number n of context attributes: for discrete context attributes it is at most n; continuous attributes can even occur several times on the path due to data partitioning [66]. So, generally, the prediction time is depth × T_aa and we approximate…”
Section: Non-functional Properties
Citation type: mentioning (confidence: 99%)
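A back-of-the-envelope sketch of that approximation, assuming T_aa denotes the per-level time to access and test one context attribute; the numbers used below are hypothetical.

```python
# Prediction-cost estimate: one attribute test per tree level,
# so cost ≈ depth * t_aa (t_aa = assumed per-test attribute-access time).
def predicted_lookup_time(depth: int, t_aa: float) -> float:
    """Approximate prediction time for a decision tree of the given depth."""
    return depth * t_aa

# With n discrete context attributes the depth is at most n,
# giving a worst-case bound of n * t_aa.
n_attributes = 8
t_aa = 2e-7  # hypothetical per-test cost in seconds
print(predicted_lookup_time(n_attributes, t_aa))
```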
“…Since the advent of the first decision tree (DT) learning algorithms, several decades ago, the researchers have come up with a number of criteria (called split criteria or split quality measures or selection measures) for top-down DT construction [1,10,8]. Some comparisons of such criteria [9,2] have been published.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
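For concreteness, two of the classic split-quality measures discussed in that literature, entropy-based information gain and Gini impurity, can be written in a few lines; the example labels below are illustrative only.

```python
# Two common split criteria for top-down decision-tree construction.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, left, right):
    # Entropy reduction achieved by splitting `parent` into `left` and `right`.
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

parent = [0, 0, 0, 1, 1, 1]
left, right = [0, 0, 0, 1], [1, 1]
print(information_gain(parent, left, right))      # gain of this candidate split
print(gini(parent), gini(left), gini(right))      # impurity before and after
```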