Proceedings. IEEE International Symposium on Information Theory
DOI: 10.1109/isit.1993.748670

On the Finite Sample Performance of the Nearest Neighbor Classifier

Abstract: The finite sample performance of a nearest neighbor classifier is analyzed for a two-class pattern recognition problem. An exact integral expression is derived for the m-sample risk R_m, given that a reference m-sample of labeled points, drawn independently from Euclidean n-space according to a fixed probability distribution, is available to the classifier. The risk is shown to converge to the infinite-sample risk R_∞ at rate O(m^{-2/n}) if the class-conditional probability densities have uniformly bounded third derivatives on their probability one support. This analysis thus provid…
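As an informal complement to the abstract, the sketch below estimates the m-sample risk of a 1-NN rule by Monte Carlo on an assumed two-class Gaussian mixture in the plane, so the decrease of the error with the reference sample size can be observed directly. Everything here (the mixture, the parameter values, and the helper names sample and one_nn_risk) is an illustrative assumption; the paper itself works with an exact integral expression for R_m rather than simulation.

```python
# Illustrative Monte Carlo estimate of the m-sample risk R_m of a 1-NN classifier
# on a synthetic two-class problem. The Gaussian class-conditional densities and
# all parameter values below are assumptions chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_dim = 2            # dimension of the Euclidean feature space (the paper's n)
prior = 0.5          # equal class priors
mu0, mu1 = np.zeros(n_dim), np.full(n_dim, 1.5)   # class-conditional means

def sample(size):
    """Draw labeled points from the assumed two-class Gaussian mixture."""
    labels = rng.random(size) < prior
    points = np.where(labels[:, None],
                      rng.normal(mu1, 1.0, (size, n_dim)),
                      rng.normal(mu0, 1.0, (size, n_dim)))
    return points, labels.astype(int)

def one_nn_risk(m, n_test=2000, n_trials=30):
    """Average test error of the 1-NN rule trained on m reference points."""
    errs = []
    for _ in range(n_trials):
        X_ref, y_ref = sample(m)
        X_test, y_test = sample(n_test)
        # nearest reference point for every test point (squared Euclidean distance)
        d2 = ((X_test[:, None, :] - X_ref[None, :, :]) ** 2).sum(axis=-1)
        y_hat = y_ref[d2.argmin(axis=1)]
        errs.append((y_hat != y_test).mean())
    return float(np.mean(errs))

for m in (10, 50, 250, 1000):
    print(f"m = {m:5d}   estimated R_m ≈ {one_nn_risk(m):.4f}")
```

The printed estimates decrease toward a limiting value as m grows, which is the qualitative behavior whose rate the paper quantifies.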

Cited by 21 publications (26 citation statements). References 0 publications.
“…Two phenomena can also be observed. All datasets share the general tendency that classification performance improves monotonically as the training sample size increases, which is consistent with Psaltis et al.'s statement (Psaltis, Snapp, & Venkatesh, 1994). In the most general case, the most common distance is the Euclidean distance, which assumes the data have a Gaussian isotropic distribution.…”
Section: UCI Standard Data Sets (supporting)
Confidence: 87%
“…(1996), chapter 7, and Yang (1999), who showed that optimal convergence rates can nevertheless be arbitrarily slow. Cover (1968), Fukunaga and Hummels (1987) and Psaltis et al. (1994) have shown that, in d-variate settings, the risk of nearest neighbour classifiers converges to its limit at rate n^{-2/d}.…”
Section: Introduction (mentioning)
Confidence: 99%
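A notational note on the statement above: the citing authors write n for the training-set size and d for the dimension, whereas the paper under discussion writes m and n respectively, so the two rates are the same. A hedged LaTeX rendering of the claimed rate (the excerpt gives no constants):

```latex
% Convergence rate of the nearest neighbour risk, in both notations.
\[
R_n = R_\infty + O\!\left(n^{-2/d}\right)
\qquad\Longleftrightarrow\qquad
R_m = R_\infty + O\!\left(m^{-2/n}\right).
\]
```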
“…Examples can be found in [17], [18], [19], [20], [21], [22]. The theoretical analysis of the convergence of Random KNN turns out to be challenging.…”
Section: B. Error Rate Analysis (mentioning)
Confidence: 99%