2009 WRI World Congress on Computer Science and Information Engineering
DOI: 10.1109/csie.2009.954
A Comparative Study of Selected Classifiers with Classification Accuracy in User Profiling

Cited by 26 publications (21 citation statements). References 6 publications.
“…5 shows that NB outperforms BN and J48 classifiers with respect to the accuracy and mean absolute error of classification of BoTySeGa gameplay data. Our result confirms the previous result [23,24] …”
Section: A. The Classification Results (supporting)
confidence: 83%
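The statement above compares classifiers on accuracy and mean absolute error of classification. For nominal classes, one common definition of classification MAE (the Weka-style one; an assumption here, since the excerpt does not show a formula) averages the absolute difference between the predicted class-probability distribution and the 0/1 indicator of the true class. A minimal sketch with hypothetical probability values:

```python
def mae(prob_rows, true_idx, n_classes):
    """Mean absolute error over predicted class distributions:
    average of |p(c) - 1{c == truth}| over all instances and classes."""
    total = 0.0
    for probs, t in zip(prob_rows, true_idx):
        for c in range(n_classes):
            total += abs(probs[c] - (1.0 if c == t else 0.0))
    return total / (len(prob_rows) * n_classes)

# Hypothetical predicted distributions for 3 instances over 2 classes
probs = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
truth = [0, 1, 0]
print(mae(probs, truth, 2))  # lower is better; 0 means confident, correct predictions
```

A classifier that is both accurate and well calibrated drives this toward zero, which is why the cited work reports it alongside plain accuracy.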
“…Moreover, Locally Weighted Learning (LWL), RepTree, Decision Table and SVMReg classifiers were compared for classification. Previous works [44], [45], [46] and [47] were the first in the literature to compare the classification and clustering accuracy of different algorithms on user profiles. In [45], NB, Instance Based Learner (IBL), Bayesian Network (BN) and Lazy Bayesian Rules (LBR) classifiers were compared using a user profile dataset.…”
Section: Classification and Clustering Algorithms for User Profiling (mentioning)
confidence: 99%
“…In [45], NB, Instance Based Learner (IBL), Bayesian Network (BN) and Lazy Bayesian Rules (LBR) classifiers were compared using a user profile dataset. Furthermore, in [46], Decision Tree (DT) algorithms for user profiling (i.e., Classification and Regression Tree (SimpleCART), NBTree, Id3, J48 (a version of C4.5) and Sequential Minimal Optimization (SMO)) were compared on large user profile data.…”
Section: Classification and Clustering Algorithms for User Profiling (mentioning)
confidence: 99%
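The works cited above compare Naive Bayes against tree and Bayesian-network classifiers by accuracy on nominal user-profile data. As a rough sketch of that kind of comparison (not the papers' actual experiments), the following trains a from-scratch categorical Naive Bayes with Laplace smoothing and scores it against a majority-class baseline on invented nominal profile data; BN and J48 are Weka implementations and are not reproduced here:

```python
from collections import Counter, defaultdict

def train_nb(X, y):
    """Fit a categorical Naive Bayes: class priors plus per-attribute value counts."""
    priors = Counter(y)
    # counts[c][j][v] = number of class-c rows with value v in attribute j
    counts = defaultdict(lambda: defaultdict(Counter))
    for row, c in zip(X, y):
        for j, v in enumerate(row):
            counts[c][j][v] += 1
    return priors, counts

def predict_nb(model, row):
    priors, counts = model
    n = sum(priors.values())
    best, best_p = None, -1.0
    for c, pc in priors.items():
        p = pc / n  # class prior
        for j, v in enumerate(row):
            # Laplace smoothing over the values observed for attribute j
            vals = set()
            for cc in counts:
                vals |= set(counts[cc][j])
            p *= (counts[c][j][v] + 1) / (pc + len(vals))
        if p > best_p:
            best, best_p = c, p
    return best

def accuracy(preds, y):
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Hypothetical "play style" profiles: (session_length, difficulty) -> player type
X = [("short", "easy"), ("short", "hard"), ("long", "easy"),
     ("long", "hard"), ("long", "hard"), ("short", "easy")]
y = ["casual", "casual", "core", "core", "core", "casual"]

model = train_nb(X, y)
preds = [predict_nb(model, row) for row in X]
majority = Counter(y).most_common(1)[0][0]
base = [majority] * len(y)
print("NB accuracy:", accuracy(preds, y))
print("baseline accuracy:", accuracy(base, y))
```

On real profile data one would of course hold out a test split; the point is only the shape of the comparison the cited works describe.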
“…ID3 is one of the popular DT algorithms; it handles nominal data sets but does not handle missing values [16]. ID3 is the classical version of decision tree induction, and its improved versions are SPRINT, SLIQ, and CART.…”
Section: Introduction (mentioning)
confidence: 99%
“…The researchers mentioned the weak points of DT: it has high error rates when the training set contains a small number of instances of a large variety of different classes, and DT algorithms may not work well on data sets where attributes split in other shapes. Decision Trees are also quite expensive to build. ID3 is one of the popular DT algorithms; it handles nominal data sets but does not handle missing values [16]. ID3 is the classical version of decision tree induction, and its improved versions are SPRINT, SLIQ, and CART.…”
(mentioning)
confidence: 99%
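Both statements describe ID3 as the classical induction algorithm for nominal attributes. ID3 grows the tree greedily, picking at each node the nominal attribute with the highest information gain; a minimal sketch of that gain computation on invented toy data:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Information gain of splitting `rows` on nominal attribute index `attr`."""
    n = len(labels)
    parts = {}
    for row, lab in zip(rows, labels):
        parts.setdefault(row[attr], []).append(lab)
    remainder = sum(len(p) / n * entropy(p) for p in parts.values())
    return entropy(labels) - remainder

# Toy nominal data: (outlook, windy) -> play; outlook separates classes perfectly
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["yes", "yes", "no", "no"]

gains = [info_gain(rows, labels, j) for j in range(2)]
print(gains)  # attribute 0 yields pure children, attribute 1 yields no gain
```

A row with a missing attribute value has no branch to follow in this scheme, which is exactly the limitation the cited text notes; C4.5-style successors add fractional-instance handling for that case.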