2009
DOI: 10.2991/jnmp.2009.2.1.7
Accuracy Evaluation of C4.5 and Naive Bayes Classifiers Using Attribute Ranking Method

Abstract: This paper classifies the Ljubljana Breast Cancer dataset using the C4.5 Decision Tree and Naïve Bayes classifiers. Classification is carried out using two methods. In the first method, the dataset is analysed using all of its attributes. In the second method, the attributes are ranked using the information gain ranking technique, and only the highest-ranked attributes are used to build the classification model. We are evaluating the results of the C4.5 Decision Tree and Naïve Bayes classifiers in te…
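The abstract's second method ranks attributes by information gain before training. As a minimal sketch of that ranking step (not the paper's own code; the toy dataset and function names below are illustrative assumptions), information gain for a categorical attribute is the class entropy minus the weighted entropy after splitting on that attribute:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Reduction in class entropy from splitting on the attribute at attr_index."""
    base = entropy(labels)
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return base - remainder

# Toy dataset: two categorical attributes, binary class.
rows = [("low", "a"), ("low", "b"), ("high", "a"), ("high", "b")]
labels = ["no", "no", "yes", "yes"]

# Attribute 0 perfectly separates the classes; attribute 1 is uninformative,
# so ranking by information gain puts attribute 0 first.
ranking = sorted(range(2), key=lambda i: information_gain(rows, labels, i),
                 reverse=True)
# ranking == [0, 1]
```

Only the top-ranked attributes would then be retained when building the C4.5 or Naïve Bayes model, as the second method in the abstract describes.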

Cited by 3 publications (1 citation statement)
References 7 publications
“…In Table III the average generalization performance (with standard deviation) over the 11 datasets, with 5 models for every boosting algorithm, is shown. For a more extensive comparison, we introduce other evaluation criteria (including recall, F-score, FP rate, specificity, and Matthews 24,29,30). It is difficult to list all the results of the 5 algorithms, so we show only the results of AdaBoost, LPBoost, and StrongLPBoost (Table 4). Note that except for the heart and diabetes datasets, the performance of StrongLPBoost is better than the other boosting algorithms in almost all cases.…”
Section: Accuracy
confidence: 98%
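The citation statement compares classifiers using recall, F-score, FP rate, specificity, and the Matthews correlation coefficient. All five follow directly from a binary confusion matrix; a minimal sketch (not the citing paper's code; the function name and example counts are illustrative assumptions) is:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary classifier."""
    recall = tp / (tp + fn)                    # true positive rate
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)
    fp_rate = fp / (fp + tn)                   # false positive rate
    specificity = tn / (tn + fp)               # true negative rate
    # Matthews correlation coefficient: balanced measure in [-1, 1]
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"recall": recall, "f_score": f_score, "fp_rate": fp_rate,
            "specificity": specificity, "matthews": mcc}

# Hypothetical confusion matrix from one classifier on one dataset.
m = binary_metrics(tp=40, fp=10, tn=45, fn=5)
```

Reporting several of these together, as the citing paper does, guards against a classifier that scores well on accuracy alone by favouring the majority class.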