Hybrid decision tree and naïve Bayes classifiers for multi-class classification tasks
2014 | DOI: 10.1016/j.eswa.2013.08.089

Cited by 348 publications (134 citation statements) | References 28 publications
“…· C4.5, standard decision tree. · NBTree [19], decision tree with naive Bayes as the leaf node. · RTAN [20], tree-augmented naive Bayes ensembles.…”
Section: Statistical Results on UCI Data Sets (mentioning)
confidence: 99%
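The excerpt above names the baselines typically compared against the paper's hybrid approach. Below is a minimal sketch of the NBTree idea it mentions, assuming scikit-learn is available: a shallow decision tree partitions the feature space, and a naive Bayes model is fitted on the training samples that reach each leaf. The `NaiveBayesTree` class, its parameters, and the iris data are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier


class NaiveBayesTree:
    """Shallow decision tree with a GaussianNB model in each leaf (illustrative)."""

    def __init__(self, max_depth=2, min_samples_leaf=20):
        self.tree = DecisionTreeClassifier(max_depth=max_depth,
                                           min_samples_leaf=min_samples_leaf)
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)  # leaf index of each training sample
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            # One naive Bayes model per leaf, trained on that leaf's samples.
            self.leaf_models[leaf] = GaussianNB().fit(X[mask], y[mask])
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        y_pred = np.empty(len(X), dtype=self.tree.classes_.dtype)
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            y_pred[mask] = self.leaf_models[leaf].predict(X[mask])
        return y_pred


X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = NaiveBayesTree().fit(X_tr, y_tr)
print("test accuracy:", (model.predict(X_te) == y_te).mean())
```

The design intuition behind such hybrids is that the tree captures coarse feature interactions while the per-leaf naive Bayes model smooths predictions within each partition, which can help when leaves would otherwise be decided by very few samples.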
“…Classification is a supervised learning method intended to build a detection model that can explain a data class and sort data points or elements into classes corresponding to their features, based on a set of pre-determined class labels [13], [22]. Consequently, the labelled elements are then used to predict the class label of new elements.…”
Section: II (mentioning)
confidence: 99%
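As a concrete illustration of the train-then-predict workflow that excerpt describes, here is a minimal supervised classification sketch. It assumes scikit-learn and uses its bundled iris data as a stand-in for the labelled elements; nothing here comes from the cited papers.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Features and their pre-determined class labels.
X, y = load_iris(return_X_y=True)
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

# Build the model from labelled elements, then predict labels of new ones.
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("predicted labels:", clf.predict(X_new)[:5])
print("accuracy on held-out elements:", clf.score(X_new, y_new))
```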
“…Thus, clustering groups objects based on the characteristics of data points, where every single data point in a cluster is similar to those within its cluster but different from those in other clusters [20], [21]. Clustering can group similar malicious data points together into one or more clusters, including previously unseen malicious data points [22]. In addition, k-means clustering is much more efficient than other existing clustering methods, as the algorithm is able to process huge volumes of instances quickly and has linear complexity.…”
Section: II (mentioning)
confidence: 99%
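A short k-means sketch of the behaviour that excerpt describes, again assuming scikit-learn: points are grouped by similarity, and `predict` assigns previously unseen points to the nearest existing cluster. The two synthetic "behaviour profiles" below are hypothetical stand-ins for malicious and benign feature vectors.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-ins: two well-separated groups of 4-dimensional points.
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
malicious = rng.normal(loc=5.0, scale=1.0, size=(500, 4))
X = np.vstack([benign, malicious])

# Fit k-means on the full set, then assign new, unseen points to clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
unseen = rng.normal(loc=5.0, scale=1.0, size=(3, 4))  # new malicious-like points
print("clusters of unseen points:", kmeans.predict(unseen))
```

Because each iteration only computes distances from every point to k centroids, the cost grows linearly with the number of instances, which is the efficiency property the excerpt appeals to.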
“…Decision trees are applied effectively when the number of object features is not large [7], [8]. Apart from random forests [9], boosting is applicable when the number of object features is on the order of hundreds or thousands [10], [11].…”
Section: Related Work (mentioning)
confidence: 99%
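To make that contrast concrete, here is a hedged sketch comparing a single decision tree with gradient boosting on a high-dimensional synthetic problem. It assumes scikit-learn; the dataset shape (1000 features, 20 informative) is a hypothetical choice for illustration, not taken from the cited papers.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical dataset with many features, few of them informative.
X, y = make_classification(n_samples=2000, n_features=1000,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single deep tree tends to overfit in this regime; boosting combines
# many shallow trees and usually generalises better.
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("single tree:", tree.score(X_te, y_te))
print("boosting:  ", boost.score(X_te, y_te))
```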