Meta-heuristics are high-level approaches developed to discover heuristics that provide reasonable solutions to a wide variety of optimization problems. Classification can itself be cast as an optimization problem: the objective is simply to minimize the number of misclassified instances. This paper addresses the question of whether meta-heuristic methods can be used to construct linear models. To this end, Particle Swarm Optimization (PSO) is employed to solve linear classification problems. The resulting Particle Swarm Classifier (PSC), equipped with a specific objective function, is compared with the Support Vector Machine (SVM), the Perceptron Learning Rule (PLR), and Logistic Regression (LR) on fifteen data sets. The experimental results indicate that PSC is competitive with the other classifiers and outperforms them on some binary classification problems. The average classification accuracies of PSC, SVM, LR, and PLR are 80.8%, 80.6%, 80.9%, and 57.7%, respectively. The classification performance of PSC could be further enhanced by developing more advanced objective functions, and accuracy could be boosted further by constructing tighter constraints via another meta-heuristic.
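To make the idea concrete, the sketch below shows PSO searching for a linear decision boundary whose objective is the number of misclassified instances, as described in the abstract. This is an illustrative assumption, not the authors' implementation: the function names, parameter values, and update rule are standard PSO choices rather than details taken from the paper.

```python
# Hedged sketch: plain PSO minimizing the misclassification count of a
# linear classifier w.x + b. All names and hyperparameters are illustrative.
import numpy as np

def misclassification_count(params, X, y):
    # params = [w_1, ..., w_d, b]; predict +1 if w.x + b >= 0, else -1
    w, b = params[:-1], params[-1]
    preds = np.where(X @ w + b >= 0, 1, -1)
    return np.sum(preds != y)

def pso_linear_classifier(X, y, n_particles=30, n_iters=200,
                          inertia=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1] + 1                       # weights plus bias
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_cost = np.array([misclassification_count(p, X, y) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    gbest_cost = pbest_cost.min()

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([misclassification_count(p, X, y) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        if cost.min() < gbest_cost:
            gbest, gbest_cost = pos[np.argmin(cost)].copy(), cost.min()
    return gbest

if __name__ == "__main__":
    # Toy usage on a linearly separable problem with labels in {-1, +1}
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    params = pso_linear_classifier(X, y)
    print("misclassified:", misclassification_count(params, X, y))
```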
Discarding superfluous instances from data sets shortens the learning process and can also improve learning performance by eliminating noisy data. Instance selection methods are commonly used to carry out these tasks. In this paper, we propose a new supervised instance selection algorithm called Border Instances Reduction using Classes Handily (BIRCH). BIRCH examines the k-nearest neighbors of each instance and selects only those instances whose neighbors all belong to the same class, i.e., instances that have no neighbors from a different class. It is compared with one traditional and four state-of-the-art instance selection algorithms on fifteen data sets from various domains. The empirical results show that BIRCH achieves a good trade-off between accuracy rate and reduction rate by tuning the number of neighbors. Furthermore, the proposed method reliably yields high classification accuracy. The source code of the proposed algorithm is available at https://github.com/fatihaydin1/BIRCH.
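The sketch below illustrates the selection rule as described in the abstract: an instance is kept only if all of its k nearest neighbors carry the same label as the instance itself. This is a reading of the abstract, not the released BIRCH code (see the GitHub repository above for the authors' implementation); the function name and the use of Euclidean distance are assumptions.

```python
# Hedged sketch of the selection rule from the abstract: keep an instance
# only if all of its k nearest neighbors share the instance's own class.
import numpy as np

def select_instances(X, y, k=3):
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    # Pairwise squared Euclidean distances; exclude each point from its own neighborhood
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    # Indices of the k nearest neighbors of every instance
    nn_idx = np.argsort(d2, axis=1)[:, :k]
    # Keep an instance only if all of its k neighbors carry the same label
    keep = np.all(y[nn_idx] == y[:, None], axis=1)
    return X[keep], y[keep], keep

if __name__ == "__main__":
    # Toy usage: two Gaussian blobs; the kept subset shrinks near the class border
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    X_sel, y_sel, mask = select_instances(X, y, k=3)
    print(f"kept {mask.sum()} of {len(y)} instances")
```

Tuning k directly controls the accuracy/reduction trade-off mentioned in the abstract: larger neighborhoods impose a stricter purity requirement and discard more instances.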