1975
DOI: 10.1109/tit.1975.1055464
An algorithm for a selective nearest neighbor decision rule (Corresp.)

Cited by 300 publications (153 citation statements)
References 8 publications
“…This is the case of Condensed Nearest Neighbor (CNN) [104], one of the oldest, the SNN (Selective Nearest Neighbor rule) method [197], and the Generalized Condensed Nearest Neighbor rule (GCNN) [44]. Other early instance selection methods focused on discarding noisy instances in the training set, like the Edited Nearest Neighbor method [226] and some variations like the all k-NN method [215] and the Multiedit method [55].…”
Section: Wrapper Instance Selection
confidence: 99%
“…This technique is very sensitive to noise and to the order in which the training set cases are presented. Ritter [10] reported improvements on the CNN with his Selective Nearest Neighbour (SNN), which imposes the rule that every case in the training set must be closer to a case of the same class in the edited set than to any other training case of a different class. Gates [11] introduced a decremental technique which starts with the edited set equal to the training set and removes a case from the edited set when its removal does not cause any other training case to be misclassified.…”
Section: Early Techniques
confidence: 99%
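The condensation idea the excerpt above describes can be sketched in code. The following is a minimal illustration of Hart's Condensed Nearest Neighbor (CNN) rule using a 1-NN classifier over Euclidean distance; the function names and the synthetic two-cluster data are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def nearest_label(point, store_X, store_y):
    """Label of the stored case closest to `point` (plain 1-NN)."""
    dists = np.linalg.norm(store_X - point, axis=1)
    return store_y[int(np.argmin(dists))]

def condensed_nn(X, y):
    """Return indices of a condensed subset that classifies all of X correctly.

    Incremental rule: a training case is absorbed into the store only if the
    current store misclassifies it; passes repeat until nothing is added.
    """
    store = [0]  # seed the store with the first training case
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in store:
                continue
            sX, sy = X[store], y[store]
            if nearest_label(X[i], sX, sy) != y[i]:
                store.append(i)  # misclassified: absorb into the store
                changed = True
    return store

# Two well-separated clusters: the condensed store stays far smaller than X,
# consistent with the roughly threefold reduction the excerpts mention.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
subset = condensed_nn(X, y)
```

Because the passes repeat until a full sweep adds no case, the condensed subset is guaranteed to classify every training case correctly, which is exactly the consistency property CNN targets; its noted weakness, as the excerpt says, is that noisy cases and presentation order directly shape which points get absorbed.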
“…As will be shown below, CNN condenses the number of vectors threefold on average. The performance of the CNN algorithm is not good, but this model inspired the construction of new methods such as SNN by Ritter et al. [5], RNN by Gates [6], and ENN by Wilson [7]. A group of three algorithms were inspired by the encoding length principle [8].…”
Section: Introduction
confidence: 99%