1973
DOI: 10.1109/tit.1973.1055003

Optimization of k nearest neighbor density estimates

Cited by 170 publications (72 citation statements)
References 7 publications
“…The set of labeled examples is typically very small compared with the set of unlabeled examples. Based on such information, Sinkkonen and Kaski [10] proposed a local metric learning method to improve clustering and visualization results.…”
Section: Article In Press (mentioning)
confidence: 99%
“…Instead of choosing the metric manually, a promising approach is to learn the metric from data automatically. This idea can be dated back to some early work on optimizing the metric for k-nearest neighbor density estimation [1]. Later, optimal local metric [2] and optimal global metric [3] were also developed for nearest neighbor classification.…”
Section: Introduction (mentioning)
confidence: 99%
“…Alippi and Roveri [2,3] demonstrate how to modify the kNN algorithm for use in the streaming case. First, they demonstrate how to appropriately choose k in a data stream which does not exhibit concept drift, based on theoretical results from Fukunaga [33]. With this framework, they describe how to update the kNN classifier when no concept drift is detected (add new instances to the knowledge base), and when concept drift is detected (remove obsolete examples from the knowledge base).…”
Section: K-Nearest Neighbors Based Methods (mentioning)
confidence: 99%
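The update policy quoted above is simple enough to sketch. Below is a minimal, hypothetical illustration in Python, not Alippi and Roveri's actual implementation: the knowledge base grows while no drift is detected, and obsolete examples are dropped once a drift signal arrives; the drift detector itself is assumed to be supplied by the caller.

```python
import math
from collections import Counter

class StreamingKNN:
    """Sketch of a kNN classifier maintained over a data stream."""

    def __init__(self, k=5, keep_after_drift=100):
        self.k = k
        self.keep_after_drift = keep_after_drift  # recent examples kept on drift
        self.X, self.y = [], []                   # the knowledge base

    def predict(self, x):
        # Majority vote among the k nearest stored examples.
        neighbors = sorted(
            zip(self.X, self.y), key=lambda xy: math.dist(x, xy[0])
        )[: self.k]
        return Counter(label for _, label in neighbors).most_common(1)[0][0]

    def update(self, x, y, drift_detected):
        if drift_detected:
            # Concept drift detected: discard obsolete examples, keeping
            # only the most recent ones under the new concept.
            self.X = self.X[-self.keep_after_drift:]
            self.y = self.y[-self.keep_after_drift:]
        # In either case, add the new labeled instance to the knowledge base.
        self.X.append(x)
        self.y.append(y)
```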
“…In our first experiment, we theoretically compute the optimal choice of k for a fixed partition with M = 3.5×10⁴ and N = 1.5×10⁴. We then show the variation of the theoretical and experimental M.S.E.…”
Section: Optimal Selection Of Free Parameters (mentioning)
confidence: 99%
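For context, the quantity tuned in this excerpt is the mean squared error of the kNN density estimate p̂(x) = k / (N · V_k(x)), where V_k(x) is the volume of the smallest neighborhood of x containing k sample points; the 1973 paper's subject is the choice of k minimizing this error. The following sketch sweeps k and measures the empirical MSE against a known density; the sample size and the standard normal test density are illustrative choices, not the cited experiment's settings.

```python
import numpy as np

def knn_density_1d(x, sample, k):
    # Distance to the k-th nearest sample point; in one dimension the
    # "ball" is an interval of length 2 * r_k, so p_hat = k / (N * 2 * r_k).
    r_k = np.sort(np.abs(sample - x))[k - 1]
    return k / (len(sample) * 2.0 * r_k)

rng = np.random.default_rng(0)
sample = rng.standard_normal(1500)                  # N = 1500 draws from N(0, 1)
grid = np.linspace(-2.0, 2.0, 81)
true_p = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)  # true standard normal density

# Small k gives high-variance estimates; large k oversmooths.
for k in (5, 20, 80, 320):
    est = np.array([knn_density_1d(x, sample, k) for x in grid])
    mse = np.mean((est - true_p) ** 2)
    print(f"k = {k:4d}  empirical MSE = {mse:.2e}")
```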