2014
DOI: 10.14445/22312803/ijctt-v17p109

Comparative Analysis of Algorithms in Supervised Classification: A Case study of Bank Notes Dataset

Cited by 15 publications (7 citation statements) · References 0 publications
“…As the ten datasets are well known and are frequently used for evaluation of classification methods (Ghazvini et al 2014;Duch et al 2012;Tao et al 2004), we compare the performance of the δ-machine against the previous results. In general, there was not a single classification method that had significantly best results for all these datasets in the past, because each classification method relies on assumptions, estimators, or approximations.…”
Section: Results (mentioning, confidence: 99%)
“…Any classification method can achieve the best performance if there is an ideal dataset that fulfills these (Duin et al 2010). The three studies carried out by Duch et al (2012), Tao et al (2004), and Ghazvini et al (2014) had seven, two, and one out of ten datasets overlapping with our ten datasets, respectively. From these studies, we conclude that in Ghazvini et al (2014), and Tao et al (2004) five out of the ten datasets, SVM(RBF) had achieved the lowest MR. SVM with a linear kernel, multilayer perceptron networks (Hornik et al 1989), the iterative axis-parallel rectangle algorithm (Dietterich et al 1997) and kernel-based multiple-instance learning model (Tao et al 2004) had one best result each.…”
Section: Results (mentioning, confidence: 99%)
“…Some paper has the same goal to measure accuracy in each method used for classification, especially classification for banknotes. These methods are Naïve Bayes and Multilayer Perceptron [6]; Probabilistic neural network (PNN), Multi-layer Perceptron (MLP), Radial Basis Function (RBF), Decision Tree (DT), and Naïve Base [3]; Backpropagation Neural Network (BPN) and Support Vector Machine (SVM) [1].…”
Section: Literature Review (mentioning, confidence: 99%)
“…The level accuracy of each method is used differently. In the Naïve Bayes method and Multilayer Perceptron method, the accuracy rate is 95% and 97%, so the Multilayer Perceptron method outperforms the Naïve Bayes method [6]. Whereas in the comparison of the Probabilistic neural network (PNN), Multi-layer Perceptron (MLP), Radial Basis Function (RBF), Decision Tree (DT), and Naïve Bayes method, the best accuracy of the method is Decision Tree (DT) method.…”
Section: Literature Review (mentioning, confidence: 99%)
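The citing studies above compare classifier accuracy on banknote data. As a rough illustration of that kind of comparison (not the cited authors' code), the following Python sketch fits the methods named in the quotes on the UCI banknote authentication dataset and reports held-out accuracy; the dataset URL, the 80/20 split, and the model settings are assumptions made here for demonstration.

# Illustrative sketch only: compares the classifiers mentioned in the
# citation statements on the UCI banknote authentication dataset.
# The URL, split, and hyperparameters are assumptions, not the cited setups.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "00267/data_banknote_authentication.txt")  # assumed dataset location
cols = ["variance", "skewness", "curtosis", "entropy", "class"]
data = pd.read_csv(URL, header=None, names=cols)

X, y = data[cols[:-1]], data["class"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

models = {
    "Naive Bayes": GaussianNB(),
    "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")

Numbers produced by this sketch will differ from the accuracies reported in the cited papers, which used their own splits, preprocessing, and implementations.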