Structure-driven induction of decision tree classifiers through neural learning
1997
DOI: 10.1016/s0031-3203(97)00005-8

Cited by 25 publications (10 citation statements)
References 23 publications
“…Another approach is to build a fully split tree and then prune back certain leaves that are considered overly specific. An alternative to these top-down construction methods is a recently proposed approach that determines the structure of the tree first, and then determines all the splits simultaneously by optimizing a global criterion [29].…”
Section: Methods For Constructing a Decision Tree (citation type: mentioning)
confidence: 99%
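As a rough illustration of the grow-then-prune strategy described in this statement, the sketch below grows an unrestricted scikit-learn decision tree and prunes it back with cost-complexity pruning; the dataset, train/test split, and pruning strength are arbitrary stand-ins, and the structure-first, globally optimized alternative attributed to [29] is not reproduced here.

```python
# Illustrative grow-then-prune sketch (not from the cited paper):
# grow a fully split tree, then prune back overly specific leaves.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fully grown tree (no depth limit), then candidate pruning strengths.
full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
path = full_tree.cost_complexity_pruning_path(X_train, y_train)

# Refit with a nonzero ccp_alpha so the most specific leaves are pruned away.
alpha = path.ccp_alphas[len(path.ccp_alphas) // 2]   # arbitrary mid-range choice
pruned_tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
pruned_tree.fit(X_train, y_train)

print("leaves before/after pruning:",
      full_tree.get_n_leaves(), pruned_tree.get_n_leaves())
```

In practice the pruning strength is usually chosen by sweeping `ccp_alpha` over the values in `path.ccp_alphas` and validating each pruned tree, rather than picking one value as done here.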
“…Currently, the neural tree algorithms have been widely applied in various fields (e.g., Tsujino and Nishida, 1995; Sethi and Yoo, 1997; Bhattacharya and Solomatine, 2005; Maji, 2008; Etemad-Shahidi and Mahjoobi, 2009; Su et al., 2010; Selamat and Ng, 2011). This study proposes a neural-based decision tree method, which is one of the neural tree algorithms.…”
Section: Literature Reviews (citation type: mentioning)
confidence: 99%
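To give a concrete (and heavily simplified) sense of what a neural-style decision tree looks like, the toy below uses a single differentiable split whose gate and leaf parameters are learned jointly by gradient descent; the synthetic data, depth-1 structure, squared-error loss, and learning rate are illustrative assumptions, not the method of the works cited in this statement.

```python
# Toy "soft" decision node (my own construction, not the cited algorithm):
# the split is a sigmoid of a linear function of the input, so the split
# and leaf parameters can be trained together by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)        # simple separable target

w, b = rng.normal(size=2), 0.0                   # gating (split) parameters
leaf = np.array([2.0, -2.0])                     # leaf logits, left vs. right
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    g = sigmoid(X @ w + b)                       # probability of routing left
    sL, sR = sigmoid(leaf[0]), sigmoid(leaf[1])  # leaf class-1 probabilities
    p = g * sL + (1.0 - g) * sR                  # mixture prediction P(y=1|x)
    err = p - y                                  # d(0.5*(p-y)^2)/dp per sample
    dg = err * (sL - sR) * g * (1.0 - g)         # backprop through the gate
    w -= lr * X.T @ dg / len(X)
    b -= lr * dg.mean()
    leaf[0] -= lr * (err * g * sL * (1.0 - sL)).mean()
    leaf[1] -= lr * (err * (1.0 - g) * sR * (1.0 - sR)).mean()

print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```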
“…Then, the effective features were trained and tested using the SVM to determine the level of valve leakage. To verify the SVM models, the performance of the classifiers, including the accuracy, Cohen's kappa, and the training time, was compared to the corresponding data from the k-nearest neighbor classifier (k-NN) (Liao et al., 2002), neural network classifier (NN) (Yu et al., 2006), naive Bayes classifier (NB) (Jiang et al., 2012), and decision tree classifier (DT) (Sethi, 1997).…”
Citation type: mentioning
confidence: 99%
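A comparison of this kind can be sketched with standard scikit-learn classifiers and metrics, as below; the digits dataset, default model settings, and wall-clock timing are placeholders and do not reflect the valve-leakage data or tuning used in the citing study.

```python
# Illustrative comparison of SVM, k-NN, NN, NB, and DT classifiers on
# accuracy, Cohen's kappa, and training time (stand-in dataset and settings).
import time

from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "NN": MLPClassifier(max_iter=500, random_state=0),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=0),
}

for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_train, y_train)
    train_time = time.perf_counter() - start
    pred = model.predict(X_test)
    print(f"{name:5s}  acc={accuracy_score(y_test, pred):.3f}  "
          f"kappa={cohen_kappa_score(y_test, pred):.3f}  "
          f"train={train_time:.3f}s")
```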