2019
DOI: 10.1007/978-3-030-36412-0_29
A True $$O(n\log {n})$$ Algorithm for the All-k-Nearest-Neighbors Problem

Abstract: In this paper we examine an algorithm for the All-k-Nearest-Neighbors problem proposed in the 1980s, which was claimed to have an O(n log n) upper bound on its running time. We find that the algorithm actually exceeds the claimed upper bound, and prove an Ω(n²) lower bound on its time complexity. We then propose a new algorithm that truly achieves the O(n log n) bound, and provide detailed and rigorous proofs that the proposed algorithm runs in O(n log n) time.
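To make the problem statement concrete, here is a minimal sketch of the All-k-Nearest-Neighbors task in plain Python. This is a brute-force baseline (O(n² log n) with sorting), shown only to define the input/output contract; it is not the paper's O(n log n) algorithm, and the function name and tuple-based point representation are illustrative choices, not from the paper.

```python
import math
from typing import List, Tuple

def all_k_nearest_neighbors(points: List[Tuple[float, float]], k: int) -> List[List[int]]:
    """For every point, return the indices of its k nearest neighbors.

    Brute-force baseline for the All-k-NN problem: compare each point
    against all others and keep the k closest. This is O(n^2 log n);
    the paper's contribution is achieving O(n log n) for the same output.
    """
    result = []
    for i, p in enumerate(points):
        # Sort the other points by (distance, index) and keep the first k.
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        result.append([j for _, j in dists[:k]])
    return result

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(all_k_nearest_neighbors(pts, 2))  # → [[1, 2], [0, 2], [0, 1], [1, 2]]
```

The distinguishing feature of the problem is that the neighbor lists for all n points are requested at once, which is what makes an overall O(n log n) bound (rather than n separate queries) non-trivial.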

Cited by 3 publications (2 citation statements)
References 32 publications
“…Further, the cost of computing the low-lying spectrum of the graph Laplacian using iterative methods scales as [formula not extracted] [40, 41]. However, both these costs can be reduced by setting a cutoff on the number of edges per node on the graph using, for example, an efficient implementation of the nearest-neighbors algorithm [42]. In this case, the cost of the specMF algorithm scales as [formula not extracted], where [symbol not extracted] is the cutoff on the maximum number of neighbors for every node.…”
Section: Discussion
confidence: 99%
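The citation statement above describes capping the number of edges per node when building a nearest-neighbor graph. A minimal sketch of that construction, assuming 2-D points and using brute-force distances for clarity (a k-d tree or the paper's O(n log n) algorithm would replace the inner loop), might look like this; the function name `knn_graph` and the parameter `k_max` are illustrative, not from the cited work:

```python
import math
from typing import Dict, List, Set, Tuple

def knn_graph(points: List[Tuple[float, float]], k_max: int) -> Dict[int, Set[int]]:
    """Build an undirected kNN graph with at most k_max outgoing edges
    per node before symmetrization. After symmetrizing, a node may end
    up with more than k_max neighbors, but the total edge count stays
    O(k_max * n), which is what bounds the Laplacian's sparsity."""
    adj: Dict[int, Set[int]] = {i: set() for i in range(len(points))}
    for i, p in enumerate(points):
        # Keep only the k_max closest other points (ties broken by index).
        nearest = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )[:k_max]
        for _, j in nearest:
            adj[i].add(j)
            adj[j].add(i)  # symmetrize so the graph Laplacian is well defined
    return adj
```

With such a cutoff, the graph has O(k_max · n) edges regardless of n, which is the sparsity that makes the iterative eigensolver step cheap.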
“…For example, a k-d tree has average O(log N) complexity for finding the nearest neighbor of a sample in randomly distributed datasets [47]. Moreover, for Euclidean space, it is possible to find the k nearest neighbors of every sample in O(k²N) time [48].…”
Section: B. Nearest Neighbor Search Methods
confidence: 99%
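The k-d tree query cited above reaches its average O(log N) cost by pruning subtrees whose splitting plane lies farther away than the current best candidate. A self-contained sketch of that idea for 2-D points follows; this is a textbook-style illustration, not the implementation from reference [47]:

```python
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]

class KDNode:
    __slots__ = ("point", "left", "right", "axis")
    def __init__(self, point: Point, left: "Optional[KDNode]",
                 right: "Optional[KDNode]", axis: int):
        self.point, self.left, self.right, self.axis = point, left, right, axis

def build_kdtree(points: List[Point], depth: int = 0) -> Optional[KDNode]:
    """Build a 2-d tree by median split on alternating axes.
    Re-sorting at each level makes this O(n log^2 n); presorting once
    per axis would bring the build down to O(n log n)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid],
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1),
                  axis)

def nearest(node: Optional[KDNode], target: Point,
            best: Optional[Point] = None) -> Optional[Point]:
    """Nearest-neighbor query: descend toward the target first, then
    visit the far subtree only if the splitting plane is closer than
    the current best distance. That pruning is what yields average
    O(log N) on randomly distributed data."""
    if node is None:
        return best
    if best is None or math.dist(target, node.point) < math.dist(target, best):
        best = node.point
    diff = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, target, best)
    if abs(diff) < math.dist(target, best):  # plane closer than best: check far side
        best = nearest(far, target, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))  # → (8, 1)
```

In the worst case (adversarial point placement) the query still degrades toward O(N), which is why the statement hedges with "randomly distributed datasets".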