DOI: 10.1007/978-3-540-74976-9_16

An Empirical Comparison of Exact Nearest Neighbour Algorithms

Abstract: Nearest neighbour search (NNS) is an old problem that is of practical importance in a number of fields. It involves finding, for a given point q, called the query, one or more points from a given set of points that are nearest to the query q. Since the problem's inception a great number of algorithms and techniques have been proposed for its solution. However, it remains the case that many of the proposed algorithms have not been compared against each other on a wide variety of datasets.…
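As a point of reference for the problem the abstract describes, here is a minimal sketch of the exhaustive linear-scan baseline that exact NNS algorithms are measured against. The code and names are illustrative, not taken from the paper.

```python
import numpy as np

def nearest_neighbour(points: np.ndarray, q: np.ndarray) -> int:
    """Return the index of the point in `points` closest to query `q`,
    found by an exhaustive linear scan over all candidates."""
    # Squared Euclidean distance preserves the ranking, so the square
    # root can be skipped when only the nearest point is needed.
    d2 = np.sum((points - q) ** 2, axis=1)
    return int(np.argmin(d2))

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))   # 1000 points in 3-D
query = rng.random(3)
print(nearest_neighbour(pts, query))
```

Tree-based methods such as kd-trees aim to beat this O(n) cost per query, at least in low dimensions, which is the setting the comparison in the paper explores.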

Cited by 53 publications (39 citation statements)
References 11 publications
“…For instance, when building a neighbourhood graph, the classical algorithm for kd-trees computes this graph so efficiently that quantizing the classical algorithm working only on the distances offers no significant advantage if the dimension d is small (Kibriya and Frank 2007). A fundamental open question is the study of the lower bounds for different clustering scenarios, be they classical or quantum mechanical.…”
Section: Fair Comparison Between Classical and Quantum Learning Algorithms
confidence: 99%
“…Such a choice will yield a low Bayes' error during classification. Furthermore, a higher dimension d of the feature space (d ≥ 20) affects the optimal choice of k and hence the accuracy of the classifier [17]. It has also been observed that higher values of k make the kNN algorithm more robust to outliers and generate smoother class boundaries.…”
Section: Introduction
confidence: 93%
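The effect of k described in the statement above can be demonstrated with a small, hypothetical experiment using scikit-learn on synthetic 20-dimensional data; neither the dataset nor the code appears in the cited sources.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic 20-dimensional problem, chosen to mirror the d >= 20 regime.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

for k in (1, 5, 15, 31):
    clf = KNeighborsClassifier(n_neighbors=k)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"k={k:2d}  mean CV accuracy = {acc:.3f}")
```

Larger k averages over more neighbours, which smooths the decision boundary and damps the influence of outliers, exactly the trade-off the statement describes.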
“…We use Euclidean distance as the distance function, as given in equation (10) [11]: $D(x, y) = \sqrt{\sum_{i=0}^{n} (x_i - y_i)^2}$ (10), where $D(x, y)$ is the distance function, $x$ is the query sample, $y$ is a sample from the training set, and $n$ is the feature dimension.…”
Section: K-Nearest Neighbor (kNN)
confidence: 99%
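Equation (10) translates directly into code. This is a straightforward rendering of the standard Euclidean distance, with names chosen for readability rather than taken from the source.

```python
import math

def euclidean_distance(x, y):
    """D(x, y) = sqrt(sum_i (x_i - y_i)^2), as in equation (10)."""
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

print(euclidean_distance([0.0, 3.0], [4.0, 0.0]))  # -> 5.0
```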