Proceedings of the Thirty-First Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2020)
DOI: 10.1137/1.9781611975994.180

Sublinear time approximation of the cost of a metric k-nearest neighbor graph

Abstract: How to cite: Please refer to the published version for the most recent bibliographic citation information. If a published version is known, the repository item page linked above will contain details on accessing it.

Cited by 4 publications (3 citation statements) | References 28 publications (43 reference statements)
“…Noting from Algorithm 5 that f · n = Xn/k, inequality (12) implies that with probability 1 − 2/n^4, … To achieve the high probability bound on the time-complexity, as in the proofs of Theorems 1.2 and 1.3, we run Θ(log n) instances of Algorithm 5 in parallel and return the output of the instance that terminates first. By Markov's inequality, each instance terminates in time O(n/ε^3) with a constant probability.…”
Section: The Final Algorithm for the Adjacency Matrix Query Model
confidence: 99%
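The amplification step in the snippet above lends itself to a short illustration: if a single run of the algorithm finishes within its time budget with constant probability (by Markov's inequality), then running Θ(log n) independent copies and returning the first output makes the failure probability polynomially small in n. Below is a minimal Python sketch of this trick; the generator toy_instance is a hypothetical stand-in for one run of Algorithm 5, which is not reproduced in the source.

    import math
    import random

    def amplify_first_to_finish(make_instance, n, c=2):
        # Run Theta(log n) independent instances "in parallel"
        # (simulated here by round-robin stepping) and return the
        # output of whichever instance terminates first.  If each
        # instance finishes within its budget with constant
        # probability, the chance that all Theta(log n) copies run
        # long is polynomially small in n.
        k = max(1, math.ceil(c * math.log(n)))   # Theta(log n) instances
        instances = [make_instance() for _ in range(k)]
        while True:
            for inst in instances:               # one step of each per round
                out = next(inst)
                if out is not None:              # first to terminate wins
                    return out

    def toy_instance():
        # Hypothetical stand-in for one run of Algorithm 5: it works
        # for a geometric number of steps, then yields its result.
        while random.random() < 0.5:
            yield None
        yield 42

    print(amplify_first_to_finish(toy_instance, n=1000))   # prints 42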
“…Although at first glance it may seem impossible to do much without reading the whole input, numerous sublinear-time algorithms have been designed over the years for various optimization problems. In addition to matching and vertex cover, which have been studied extensively in the area [25,23,26,24,19,9], the list includes estimating the weight/size of a minimum spanning tree (MST) [8,10], the traveling salesman problem (TSP) [9], the k-nearest neighbor graph [12], a graph's average degree [15,18], as well as problems such as vertex coloring [2], metric linear sampling [13], and many others. (This is by no means a comprehensive list of all the prior works.)…”
Section: Introduction
confidence: 99%
“…The K-Nearest Neighbor method uses data classification techniques in which the data are divided into clusters (Agrawal, 2019). Predictions can be calculated from the distances to the closest sample data (Gou et al., 2019; Czumaj & Sohler, 2020). This research predicts students' graduation times with the K-Nearest Neighbor algorithm.…”
Section: Introduction
confidence: 99%
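For concreteness, here is a minimal Python sketch of the k-nearest-neighbor prediction idea described in the last snippet: classify a query point by majority vote among its k closest training points. The student features, labels, and values are illustrative assumptions, not data from the cited study.

    import math
    from collections import Counter

    def knn_predict(train, query, k=3):
        # Classify `query` by majority vote among its k nearest
        # training points under Euclidean distance.
        neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
        votes = Counter(label for _, label in neighbors)
        return votes.most_common(1)[0][0]

    # Hypothetical data: (GPA, credits completed) -> graduation outcome.
    students = [((3.6, 120), "on-time"), ((2.4, 90), "late"),
                ((3.1, 110), "on-time"), ((2.0, 75), "late"),
                ((3.8, 130), "on-time")]
    print(knn_predict(students, (3.0, 105), k=3))   # prints "on-time"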