2012
DOI: 10.1016/j.patrec.2011.07.019
New rank methods for reducing the size of the training set using the nearest neighbor rule

Abstract: Some new rank methods to select the best prototypes from a training set are proposed in this paper in order to establish its size according to an external parameter, while maintaining the classification accuracy. The traditional methods that filter the training set in a classification task like editing or condensing have some rules that apply to the set in order to remove outliers or keep some prototypes that help in the classification. In our approach, new voting methods are proposed to compute the prototype…

Cited by 27 publications (24 citation statements)
References 12 publications
“…-Farther Neighbor (FN) [31]: gives a probability mass value to each prototype following a voting heuristic based on neighborhood. Prototypes are selected according to a parameter (fixed to 0.3 in our case) that indicates the probability mass desired for each class in the reduced set.…”
Section: Prototype Selection (Ps) Algorithms
Classification: mentioning (confidence: 99%)
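The mass-based selection step quoted above can be sketched as follows. This is a minimal, hypothetical reading: the FN voting heuristic itself is not reproduced, so the `votes` array is assumed to be given, and the function name and signature are illustrative, not the paper's.

```python
import numpy as np

def select_by_mass(y, votes, mass=0.3):
    """Keep the best-voted prototypes of each class until their normalized
    vote mass reaches `mass` (fixed to 0.3 in the cited experiments).
    Hypothetical sketch: the voting heuristic producing `votes` is assumed."""
    keep = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        v = votes[idx] / votes[idx].sum()   # per-class probability mass
        order = np.argsort(-v)              # best-voted prototypes first
        acc = 0.0
        for i in order:
            keep.append(idx[i])
            acc += v[i]
            if acc >= mass:                 # desired mass reached: stop
                break
    return sorted(keep)
```

A smaller `mass` keeps only the few prototypes that concentrate the votes, which is how the parameter trades reduced-set size against accuracy.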
“…Both strategies are based on the idea that a prototype can give one vote to another one, and the question is to decide which prototype it is given to. Although these strategies were previously published [22], we shall revisit below their main ideas for a better readability of the current paper.…”
Section: Rank Methods For Prototype Selection
Classification: mentioning (confidence: 99%)
“…We will denote these strategies by n-FN and n-NE. For example, configuration n = 2 will use the second nearest enemy to voting prototype a, which has proved to be useful in practice [22].…”
Section: Nearest To Enemy Voting
Classification: mentioning (confidence: 99%)
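Under one hypothetical reading of the quoted description, in n-NE voting each prototype locates its n-th nearest enemy (closest sample of a different class) and casts its vote for the same-class prototype nearest to that enemy; this is a sketch of that assumption, not the authors' implementation.

```python
import numpy as np

def ne_votes(X, y, n=1):
    """Hypothetical sketch of n-NE voting: every prototype finds its n-th
    nearest enemy and gives one vote to the prototype of its own class
    that lies closest to that enemy (boundary prototypes gather votes)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    votes = np.zeros(len(X), dtype=int)
    for a in range(len(X)):
        enemies = np.where(y != y[a])[0]
        enemy = enemies[np.argsort(D[a, enemies])[n - 1]]       # n-th nearest enemy
        friends = np.where(y == y[a])[0]
        votes[friends[np.argmin(D[enemy, friends])]] += 1       # friend nearest to enemy
    return votes
```

With n = 1 this is the plain NE rule; n = 2 uses the second nearest enemy, the configuration the citing paper reports as useful in practice.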
“…It computes a fast, order-independent condensing strategy based on seeking the centroids of each label. - The Nearest to Enemy (NE) rule [22] gives a probability mass value to each prototype following a voting heuristic based on neighboring criteria. Prototypes are selected according to a parameter (fixed to 0.3 in our case) that indicates the probability mass desired for each class in the reduced set.…”
Section: Prototype Selection
Classification: mentioning (confidence: 99%)