While standing as one of the most widely considered and successful supervised classification algorithms, the k-Nearest Neighbor (kNN) classifier generally suffers from poor efficiency due to being an instance-based method. In this sense, Approximate Similarity Search (ASS) stands as a possible alternative to alleviate these efficiency issues, at the expense of typically lowering the performance of the classifier. In this paper we take as starting point an ASS strategy based on clustering. We then improve its performance by solving issues related to instances located close to the cluster boundaries, enlarging the clusters and considering the use of Deep Neural Networks for learning a suitable representation for the classification task at issue. Results using a collection of eight different datasets show that the combined use of these two strategies entails a significant improvement in accuracy, with a considerable reduction in the number of distances needed to classify a sample in comparison to the basic kNN rule.

[…] if the instance at issue is not part of the cluster being examined, we include it inside the cluster, thus approaching the space partitioning to something similar to a fuzzy clustering; (iv) this process is done for each of the clusters obtained. This strategy increases the likelihood that all the k nearest neighbors of a given test instance fall in the same cluster. Note also that both the clustering process and the proposed enlargement are performed as a preprocessing stage, thus not affecting the efficiency of the classification process. As will be experimentally checked later, this process of increasing the cluster size approaches the brute-force kNN scenario in terms of accuracy at far less computational cost.

Furthermore, recent advances in feature learning, namely deep learning, have made a breakthrough in the ability to learn suitable features for classification. That is, instead of resorting to hand-crafted features, the models are trained to infer, out of the raw input signal, the most suitable features for the task at hand. This representation learning is performed by means of Deep Neural Networks (DNN), consisting of a number of layers which are able to […]
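The cluster-enlargement idea can be sketched as follows. This is a minimal illustration, assuming k-means as the clustering method, Euclidean distance, and integer class labels; the function names (build_enlarged_clusters, classify_ass) and the parameters n_clusters and k are hypothetical, and the DNN-based representation learning mentioned above is left out:

```python
# Hedged sketch of the cluster-enlargement ASS strategy described above.
# k-means, the parameter values, and all names are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def build_enlarged_clusters(X, n_clusters=8, k=5):
    """Partition X with k-means, then enlarge each cluster so that the
    k nearest neighbors of every member also belong to it (preprocessing)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    # k + 1 neighbors because each point is returned as its own nearest neighbor.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, neigh = nn.kneighbors(X)

    members = [set(np.where(km.labels_ == c)[0]) for c in range(n_clusters)]
    for c in range(n_clusters):
        extra = set()
        for m in members[c]:
            # Any neighbor falling outside the cluster is pulled inside,
            # yielding overlapping ("fuzzy-like") clusters.
            extra.update(n for n in neigh[m][1:] if n not in members[c])
        members[c] |= extra
    return km, [np.fromiter(m, dtype=int) for m in members]

def classify_ass(x, X, y, km, members, k=5):
    """Exact kNN restricted to the enlarged cluster of the nearest centroid."""
    c = int(km.predict(x.reshape(1, -1))[0])
    idx = members[c]
    d = np.linalg.norm(X[idx] - x, axis=1)   # distances computed only in-cluster
    votes = y[idx[np.argsort(d)[:k]]]
    return np.bincount(votes).argmax()
```

By construction, the only distances computed at query time are those against the members of a single (enlarged) cluster, which is where the claimed reduction with respect to brute-force kNN comes from.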
Prototype Selection (PS) algorithms allow faster Nearest Neighbor classification by keeping only the most profitable prototypes of the training set. In turn, these schemes typically lower the classification accuracy. In this work, a new strategy for multi-label classification tasks is proposed to solve this accuracy drop without needing to use the whole training set. Given a new instance, the PS algorithm is used as a fast recommender system that retrieves the most likely classes. The actual classification is then performed considering only the prototypes from the initial training set belonging to the suggested classes. Results show that this strategy provides a large set of trade-off solutions that fill the gap between PS-based classification efficiency and conventional kNN accuracy. Furthermore, this scheme is not only able to, at best, reach the performance of conventional kNN with barely a third of the distances computed, but it also outperforms the latter in noisy scenarios, proving to be a much more robust approach.
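As an illustration, the two-stage scheme might look as follows in a single-label simplification of the multi-label setting described above. The placeholder prototype set P (which would come from an actual PS algorithm), the helper names, and the n_classes_kept cut-off are all assumptions rather than the paper's actual method:

```python
# Hedged sketch: a reduced prototype set first recommends candidate classes,
# then exact kNN runs only over training instances of those classes.
import numpy as np

def recommend_classes(x, P, P_labels, k=5, n_classes_kept=3):
    """Rank classes by how often they appear among the k nearest prototypes."""
    d = np.linalg.norm(P - x, axis=1)
    near = P_labels[np.argsort(d)[:k]]
    counts = np.bincount(near)
    order = np.argsort(counts)[::-1]
    return order[counts[order] > 0][:n_classes_kept]

def classify_with_proposals(x, X, y, P, P_labels, k=5):
    """Exact kNN over the initial training set, restricted to suggested classes."""
    cand = recommend_classes(x, P, P_labels, k)
    mask = np.isin(y, cand)                  # keep only recommended classes
    d = np.linalg.norm(X[mask] - x, axis=1)
    votes = y[mask][np.argsort(d)[:k]]
    return np.bincount(votes).argmax()
```

The trade-off reported in the abstract is tuned here by how aggressive the PS reduction is and by n_classes_kept: a smaller candidate set means fewer distances in the second stage but a higher risk of discarding the true class.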
Highlights
• Oversampling in the string space for addressing imbalanced classification
• Generating new strings between pairs of instances using the Edit distance
• Experimentation with contour representations of handwritten digits and characters
• Statistical performance improvement of the classifier with respect to the imbalanced case
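The second highlight, generating new strings between pairs of instances with the Edit distance, could be sketched as below. The Levenshtein backtrace and the random 50% cut-off are assumptions for illustration, not necessarily the paper's exact generation procedure:

```python
# Hedged sketch: create a synthetic string "between" two class members by
# applying only part of the optimal edit path from one to the other.
import random

def edit_ops(a, b):
    """Backtrace the Levenshtein DP table into edit operations turning a into b."""
    m, n = len(a), len(b)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i
    for j in range(n + 1):
        D[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,          # deletion
                          D[i][j - 1] + 1,          # insertion
                          D[i - 1][j - 1] + cost)   # substitution / match
    ops, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and D[i][j] == D[i - 1][j - 1] and a[i - 1] == b[j - 1]:
            i, j = i - 1, j - 1                             # match, no operation
        elif i > 0 and j > 0 and D[i][j] == D[i - 1][j - 1] + 1:
            ops.append(('sub', i - 1, b[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and D[i][j] == D[i - 1][j] + 1:
            ops.append(('del', i - 1, None)); i -= 1
        else:
            ops.append(('ins', i, b[j - 1])); j -= 1
    return ops  # positions index into a, collected right-to-left

def midpoint_string(a, b, fraction=0.5):
    """Apply a random subset of the optimal edit path to a, yielding a string
    lying between a and b in the Edit-distance sense."""
    ops = edit_ops(a, b)
    chosen = set(random.sample(range(len(ops)), int(len(ops) * fraction)))
    s = list(a)
    # ops are in decreasing-position order, so earlier edits never shift
    # the positions of edits applied afterwards.
    for idx, (op, pos, ch) in enumerate(ops):
        if idx not in chosen:
            continue
        if op == 'sub':
            s[pos] = ch
        elif op == 'del':
            del s[pos]
        else:  # 'ins'
            s.insert(pos, ch)
    return ''.join(s)
```

By construction, the distance from the result to each parent string is bounded by the number of applied and remaining operations respectively, so a fraction of 0.5 places the synthetic minority-class sample roughly midway along the edit path.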