1967
DOI: 10.1109/TIT.1967.1053964
Nearest neighbor pattern classification

Abstract: …for some very helpful discussions, and Prof. D. E. Troxel for his help in designing the sensory display.

Cited by 12,289 publications (5,695 citation statements)
References 4 publications
Citation statements (ordered by relevance):
“…For the formulation of predictive models, a search for the optimal models was conducted by applying several machine learning techniques, including Multivariate Logistic Regression Analysis (MLRA), Artificial Neural Network (ANN), Decision Tree (DT) and K-Nearest Neighbors (KNN) (27)(28)(29). These statistical methods have been described previously in the field of hepatic diseases [30][31][32].…”
Section: Development of Predictive Models Using Data Mining Analysis
confidence: 99%
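
Since this page indexes Cover and Hart's nearest-neighbor paper, a minimal sketch of the decision rule that the excerpt's KNN refers to may be useful. This is an illustrative NumPy implementation of our own, not code from the citing study:

```python
import numpy as np

def knn_classify(X_train, y_train, x, k=1):
    """Return the majority label among the k training points nearest x.

    With k=1 this is the original nearest-neighbor rule of Cover and
    Hart (1967); larger k gives the usual k-NN generalization.
    """
    # Euclidean distance from x to every training sample
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest training samples
    nearest = np.argsort(dists)[:k]
    # Majority vote over their labels
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage: two 2-D classes
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_classify(X, y, np.array([0.95, 0.9])))  # -> 1
```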
“…The authors would like to note that no outliers have been artificially added to the problems. Three well-known machine learning algorithms, namely C4.5 [14], 1-nearest neighbour (1-NN) [15] and SVM (Support Vector Machines) [16], have been tested on the original data sets, on the data sets after the application of OUTLIERoutF, and once OUTLIERoutP has been carried out. We have used the implementations of the three aforementioned classifiers provided in the WEKA tool [17] with the default parameters, which are those recommended by the authors of the algorithms when the corresponding code was released.…”
Section: Experimental Setting
confidence: 99%
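
The workflow this excerpt describes (the same three classifiers, default parameters, run on both original and outlier-filtered data) can be sketched as follows. The cited study used WEKA's implementations; the scikit-learn models below are stand-ins chosen by us (DecisionTreeClassifier for C4.5, KNeighborsClassifier with n_neighbors=1 for 1-NN, SVC for SVM), so the defaults will differ from WEKA's:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier  # 1-NN stand-in
from sklearn.svm import SVC                         # SVM stand-in
from sklearn.tree import DecisionTreeClassifier     # C4.5 stand-in

# Placeholder data; the study ran this on its own original and
# outlier-filtered data sets.
X, y = load_iris(return_X_y=True)

classifiers = {
    "C4.5-like tree": DecisionTreeClassifier(random_state=0),
    "1-NN": KNeighborsClassifier(n_neighbors=1),
    "SVM": SVC(),  # default RBF kernel; WEKA's SMO defaults differ
}

# Evaluate each classifier with default parameters; the same loop
# would be rerun on the filtered variants of X for the comparison.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```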
“…In a binary recognition setting, the algorithm is provided with two datasets: a reference dataset C0 containing samples of one class only, and a mixed dataset C1 containing an unlabelled collection of samples. The algorithm then exploits the asymptotic properties of Nearest-Neighbour classification on randomly selected reference sets (Cover and Hart, 1967). Specifically, it estimates the posterior probabilities Pr{C0|x} and Pr{C1|x} using the average number of times a point x is assigned to C0 or C1 with respect to a random reference set containing n points from each collection.…”
Section: Quasi-supervised Learning
confidence: 99%
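
The estimator in this excerpt can be sketched directly: for each query point x, repeatedly draw a random reference set with n samples from each collection, assign x by 1-NN against the pooled set, and take the fraction of draws in which the nearest neighbour came from C0 as an estimate of Pr{C0|x} (Cover and Hart's asymptotic results motivate the 1-NN vote). Function names and the number of draws below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_posterior_c0(x, C0, C1, n=20, draws=500):
    """Estimate Pr{C0 | x} by averaging 1-NN assignments of x over
    random reference sets with n points from each collection."""
    hits = 0
    for _ in range(draws):
        # Random reference set: n points from C0, then n from C1
        r0 = C0[rng.choice(len(C0), size=n, replace=False)]
        r1 = C1[rng.choice(len(C1), size=n, replace=False)]
        ref = np.vstack([r0, r1])
        # 1-NN assignment of x against the pooled reference set
        nearest = np.argmin(np.linalg.norm(ref - x, axis=1))
        hits += int(nearest < n)  # first n rows came from C0
    return hits / draws

# Toy usage: C0 clustered at the origin, C1 shifted away
C0 = rng.normal(0.0, 1.0, size=(200, 2))
C1 = rng.normal(2.0, 1.0, size=(200, 2))
print(estimate_posterior_c0(np.array([0.2, -0.1]), C0, C1))  # near 1
```

In the cited scheme C1 is a mixed, unlabelled collection rather than a clean second class, so the averaged assignment rates drive the quasi-supervised grouping instead of serving as a conventional supervised posterior.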