2013
DOI: 10.1007/978-3-319-00960-5_2

Techniques of Decision Tree Induction

Cited by 9 publications (9 citation statements)
References 127 publications
“…As a result, classification of an instance proceeds down through the internal nodes, at each node following the branch that matches the value of the attribute tested there. When a leaf (a node with no outgoing edges) is reached, the instance is assigned that leaf's class [ 36 ]. The main advantage of DT is that it can be visualized easily and is relatively simple to interpret.…”
Section: Methods
confidence: 99%
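To make the traversal concrete, here is a minimal sketch in plain Python; the tree, feature names, and thresholds are invented for illustration, not taken from the cited work. Classification walks from the root, follows the branch matching each test, and returns the class stored at the leaf.

```python
# Minimal sketch of decision-tree classification (hypothetical tree).
def classify(node, instance):
    """Walk the tree from `node` down to a leaf and return its class."""
    while "label" not in node:                    # internal node: carries a test
        attr, threshold = node["test"]
        branch = "left" if instance[attr] <= threshold else "right"
        node = node[branch]                       # follow the matching branch
    return node["label"]                          # leaf: holds the class

# Tiny hand-built tree: root tests `petal_len`, right subtree tests `petal_wid`.
tree = {
    "test": ("petal_len", 2.5),
    "left": {"label": "setosa"},
    "right": {
        "test": ("petal_wid", 1.7),
        "left": {"label": "versicolor"},
        "right": {"label": "virginica"},
    },
}

print(classify(tree, {"petal_len": 4.8, "petal_wid": 1.9}))  # -> virginica
```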
“…The DT algorithm can classify both numerical and categorical data, whereas many other algorithms handle only one type of variable. DT achieves promising results with low computation time and is thus considered reliable for real-time systems [ 36 ]. The C4.5 decision tree was proposed by Ross Quinlan [ 37 ], and its rule generation has significantly sped up the training procedure.…”
Section: Methods
confidence: 99%
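C4.5 selects splits by gain ratio, i.e. information gain normalized by split information. A minimal sketch of that criterion, with an invented toy attribute (the data below is illustrative only):

```python
# Sketch of C4.5's gain-ratio split criterion for a categorical attribute.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """Gain ratio of splitting `labels` by attribute `values`."""
    n = len(labels)
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(v, []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - cond                 # information gain
    split_info = entropy(values)                  # penalizes many-valued attributes
    return gain / split_info if split_info else 0.0

outlook = ["sunny", "sunny", "rain", "rain", "overcast", "overcast"]
play    = ["no",    "no",    "yes",  "no",   "yes",      "yes"]
print(round(gain_ratio(outlook, play), 3))        # ~0.421
```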
“…The unsupervised greedy stepwise forward attribute evaluation was performed with 1,449 instances to determine the most important parameters in predicting P. xylostella abundance. The output of the M5P pruned decision-tree algorithm is a flowchart with a tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node holds a class label (Barros, de Carvalho, & Freitas, 2015; Grabczewski, 2014; Sharma, Kumar, & Maheshwari, 2015). M5P was selected because it is well suited to numeric data and was superior to the general linear model when the two methods were compared (Sharma et al., 2015).…”
Section: M5P Pruned Decision-Tree Algorithm Induction and Hyperparam-
confidence: 99%
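M5P itself ships with Weka, so there is no canonical Python version; as a rough analogue of the flowchart structure described above, the following sketch (assuming scikit-learn is available, with invented weather-style features and synthetic data) fits and prints a small regression tree whose internal nodes are attribute tests and whose leaves hold predictions:

```python
# Rough analogue of a pruned numeric tree's flowchart output
# (M5P is Weka-only; this uses a plain CART regression tree instead).
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(0, 30, size=(200, 2))             # e.g. temperature, humidity
y = 5.0 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 1, 200)

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["temperature", "humidity"]))
```

The printed output reads exactly like the flowchart the authors describe: one test per internal node, one branch per outcome.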
“…However, when dealing with large datasets, decision trees tend to become very large, complex, and difficult to understand. Consequently, the rules constructed through the 'divide and conquer' strategy and extracted from the resulting decision tree inherit the tree's complexity and may contain unnecessary repeated tests, which can lead to redundant rulesets [1], [2]. In this context, research has been carried out aiming to overcome or reduce these problems and to produce a simple, reliable ruleset using pruning methods.…”
Section: Introduction
confidence: 99%
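The redundancy problem is easy to see by reading rules off a fitted tree: one rule per root-to-leaf path, where deep paths repeat tests on the same attributes. A minimal sketch, assuming scikit-learn and the standard iris data:

```python
# Sketch: extract one rule per root-to-leaf path from a fitted tree.
# Deep paths repeat tests on the same attribute -- the redundancy that
# pruning and rule-simplification methods target.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)
t = clf.tree_

def rules(node=0, conds=()):
    if t.children_left[node] == -1:               # leaf node
        label = iris.target_names[t.value[node].argmax()]
        print(" AND ".join(conds) or "TRUE", "=>", label)
        return
    f, thr = iris.feature_names[t.feature[node]], t.threshold[node]
    rules(t.children_left[node],  conds + (f"{f} <= {thr:.2f}",))
    rules(t.children_right[node], conds + (f"{f} > {thr:.2f}",))

rules()
```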
“…In this context, research has been carried out aiming to overcome or reduce these problems and to produce a simple, reliable ruleset using pruning methods. Among others, C4.5rules [3], [4] and CART [2] are examples of such algorithms. Nevertheless, [5]–[7] argue that no single study adequately achieves this goal.…”
Section: Introduction
confidence: 99%
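CART's approach is cost-complexity pruning. A minimal sketch of the idea, assuming scikit-learn, whose ccp_alpha parameter implements this for CART-style trees; larger alphas prune more aggressively, trading tree size against accuracy:

```python
# Sketch of CART-style cost-complexity pruning via scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Effective alphas at which subtrees get pruned away.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)
for alpha in path.ccp_alphas[::5]:                # sample a few alphas
    clf = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_tr, y_tr)
    print(f"alpha={alpha:.4f}  leaves={clf.get_n_leaves()}  "
          f"test acc={clf.score(X_te, y_te):.3f}")
```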