2015
DOI: 10.1016/j.patcog.2014.11.015
Improving kNN multi-label classification in Prototype Selection scenarios using class proposals

Abstract: Prototype Selection (PS) algorithms allow faster Nearest Neighbor classification by keeping only the most profitable prototypes of the training set. In turn, these schemes typically lower classification accuracy. In this work, a new strategy for multi-label classification tasks is proposed to avoid this accuracy drop without using the whole training set. For that, given a new instance, the PS algorithm is used as a fast recommender system which retrieves the most likely classes. Then, the actual …
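The strategy outlined in the abstract can be sketched in code. This is a simplified, single-label illustration under stated assumptions (the paper itself targets the multi-label case): the function name, the two-step structure, and the parameter names are illustrative, not taken from the authors' implementation. A small PS-reduced set acts as a fast recommender of candidate classes; the final kNN vote is then taken over the full training set, restricted to those candidates.

```python
import numpy as np

def knn_class_proposals(X_full, y_full, X_reduced, y_reduced, x,
                        k=3, n_proposals=2):
    """Illustrative sketch of kNN with class proposals from a
    Prototype-Selection-reduced set (names are hypothetical)."""
    # Step 1: fast kNN on the small PS-reduced set to propose
    # the most likely candidate classes for instance x.
    d_red = np.linalg.norm(X_reduced - x, axis=1)
    nearest = y_reduced[np.argsort(d_red)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    proposals = labels[np.argsort(-counts)[:n_proposals]]
    # Step 2: kNN over the full training set, but considering only
    # prototypes whose class was proposed in step 1.
    mask = np.isin(y_full, proposals)
    d_full = np.linalg.norm(X_full[mask] - x, axis=1)
    vote = y_full[mask][np.argsort(d_full)[:k]]
    labels, counts = np.unique(vote, return_counts=True)
    return labels[np.argmax(counts)]
```

Step 2 only ever computes distances to prototypes of the proposed classes, which is where the speed-up over a plain full-set kNN comes from while still voting with the complete (unreduced) data of those classes.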

Cited by 63 publications (29 citation statements)
References 26 publications
“…Since trying to maintain the same accuracy as with the initial training set is difficult to fulfill in practical scenarios, much research has been recently devoted to enhance this process through the combination with other techniques. Some of these include Feature Selection [35], Ensemble methods [20] or modifications to the kNN rule [7].…”
Section: Background on Data Reduction
Citation type: mentioning
confidence: 99%
“…The k-Nearest Neighbors classifier frequently used in Case-based reasoning remains (i) one of the most well-known algorithms for supervised non-parametric classification (Duda et al, 2001) in Pattern Recognition, data mining and Case-based maintenance (Calvo-Zaragoza et al, 2014; Ferrandiz and Boullé, 2010) and (ii) a benchmark for experimental studies in machine learning (Leyva et al, 2015).…”
Section: Case-based Maintenance
Citation type: mentioning
confidence: 99%
“…There are numerous new research studies on the first type of method, including the MC-SVMA model [8], the K-class support vector classification-regression (K-SVCR) method [9], the learning vector quantization (LVQ) method [10], the improving prototype selection k-nearest neighbor (PS-KNN) method [11], Bayesian network-based chain classifiers [12], the BPSO-AdaBoost-KNN ensemble learning algorithm [13], and the hyper-sphere multi-class SVM (HSMC-SVM) method [14]. Among these, the algorithm that improved over the SVM principle for classifying multiple targets is the most widely applied.…”
Section: Introduction
Citation type: mentioning
confidence: 99%