2015
DOI: 10.1007/978-3-319-22053-6_9
PSO-Based Method for SVM Classification on Skewed Data-Sets

Abstract: Support Vector Machines (SVM) have shown excellent generalization power in classification problems. However, on skewed data-sets, SVM learns a biased model that affects classifier performance, which is severely degraded when the imbalance ratio is very large. In this paper, a new external balancing method for applying SVM to skewed data-sets is developed. In the first phase of the method, the separating hyperplane is computed. Support vectors are then used to generate the initial population of PS…
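The two-phase idea in the abstract (fit an SVM, then seed an initial PSO population from its support vectors) can be sketched as follows. This is only an illustration under assumptions: the function name, the cycling scheme, and the additive uniform noise are invented here and are not taken from the paper, whose actual seeding procedure is truncated in the abstract.

```python
import random

def seed_population_from_svs(support_vectors, pop_size, jitter=0.1):
    """Seed an initial PSO population near SVM support vectors.

    `support_vectors` would come from a trained SVM (phase one of the
    method described in the abstract); spreading particles around them
    with small uniform noise is an assumed scheme, not the paper's.
    """
    population = []
    for i in range(pop_size):
        sv = support_vectors[i % len(support_vectors)]  # cycle through SVs
        particle = [x + random.uniform(-jitter, jitter) for x in sv]
        population.append(particle)
    return population
```

Each particle is then a candidate point near the decision boundary, which is where new minority-class samples are most informative for rebalancing.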

Cited by 16 publications (10 citation statements); references 9 publications.
“…The values selection is the same in all four above methods in order to make comparison fairly (e.g., iteration = 30 and Pop Size = 30). For the PSO, the parameters were fixed with the values given in the literature [22,26] (i.e., = 0.9, = 0.5, and = 1.25). For CMAES, the parameters were fixed with the values given in the literature [24] (i.e., = 0.25 and = 4 + 3log( )).…”
Section: Results
confidence: 99%
“…We classified our datasets using decision tree classification algorithm, which is called J48 in WEKA. The classification stage was conducting using 5-folds cross-validation to have enough positive minority class instances in every fold to minimize the data distribution problems [20], [21]. As we are dealing with unbalanced data set, evaluating the classifier with classification accuracy alone do not give a good overview about the classifier accuracy in predicting the minority class, instead a confusion matrix such as Table II is used for checking different metrics to get a good overview about the classifier prediction power [22].…”
Section: Discussion
confidence: 99%
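The point in the excerpt above (accuracy alone hides minority-class failure, so confusion-matrix metrics are needed) can be made concrete with a short sketch; the function name and the toy labels here are made up for illustration:

```python
def minority_metrics(y_true, y_pred, minority=1):
    """Accuracy plus precision/recall/F1 for the minority class,
    computed from confusion-matrix counts (TP, FP, FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == minority and p == minority)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != minority and p == minority)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == minority and p != minority)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# A classifier that always predicts the majority class on a 8:2 split
# looks accurate (0.8) yet has zero recall on the minority class.
acc, prec, rec, f1 = minority_metrics([0] * 8 + [1] * 2, [0] * 10)
```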
“…In PSO, each particle can be represented as a candidate solution (position) in the search space. The particles fly through the search space by their own efforts and in cooperation with other particles and they follow the best solutions they have achieved (local best solutions) as well as tracking the best solutions that they found (the best global solution) (Cervantes et al, 2017;Lai et al, 2016;Mirjalili and Lewis, 2013;Wen et al, 2011).…”
Section: J Eng Applied Sci 15 (1): 310-318 2020
confidence: 99%
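The local-best/global-best tracking described in the excerpt above is the standard PSO velocity-and-position update; a minimal sketch follows. The default constants echo the values quoted earlier in this report (0.9, 0.5, 1.25), but the symbols they belonged to were lost in extraction, so mapping them to inertia w and acceleration coefficients c1/c2 is an assumption:

```python
import random

def pso_step(position, velocity, pbest, gbest, w=0.9, c1=0.5, c2=1.25):
    """One standard PSO update per dimension: an inertia term, a
    cognitive pull toward the particle's personal best (pbest), and a
    social pull toward the swarm's global best (gbest)."""
    new_x, new_v = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        r1, r2 = random.random(), random.random()  # stochastic weights
        nv = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_v.append(nv)
        new_x.append(x + nv)
    return new_x, new_v
```

A particle already at rest on both bests stays put (both pull terms vanish), which is the fixed point the swarm converges toward.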