2018
DOI: 10.1109/tit.2018.2822267

Near-Optimal Sample Compression for Nearest Neighbors

Abstract: We present the first sample compression algorithm for nearest neighbors with non-trivial performance guarantees. We complement these guarantees by demonstrating almost matching hardness lower bounds, which show that our performance bound is nearly optimal. Our result yields new insight into margin-based nearest neighbor classification in metric spaces and allows us to significantly sharpen and simplify existing bounds. Some encouraging empirical results are also presented.
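As a rough illustration of the setting (not the paper's algorithm), the sketch below classifies points by their nearest neighbour within a small retained subset of the training sample. The greedy condensing pass is the classical Hart-style heuristic, used here purely for illustration under an assumed Euclidean metric; it carries none of the margin-based guarantees established in the paper.

```python
# Minimal illustration of 1-NN classification from a compressed training
# subset. The condensing step is a classical greedy heuristic, NOT the
# margin-based compression scheme analysed in the paper.
import math

def nearest_label(point, sample):
    """Return the label of the nearest stored point (Euclidean metric)."""
    best_label, best_dist = None, math.inf
    for x, y in sample:
        d = math.dist(point, x)
        if d < best_dist:
            best_label, best_dist = y, d
    return best_label

def condense(training_set):
    """Greedily keep only points that the subset built so far misclassifies."""
    kept = [training_set[0]]
    for x, y in training_set[1:]:
        if nearest_label(x, kept) != y:
            kept.append((x, y))
    return kept

if __name__ == "__main__":
    data = [((0.0, 0.0), 0), ((0.2, 0.1), 0), ((1.0, 1.0), 1), ((0.9, 1.1), 1)]
    compressed = condense(data)
    print(len(compressed), "points kept out of", len(data))
    print("prediction for (0.1, 0.2):", nearest_label((0.1, 0.2), compressed))
```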

Cited by 36 publications (49 citation statements) | References 36 publications
“…In the first step of inserting destinations into the delivery route, the first thing to do is to sort the values that have been obtained, from the largest down to the smallest [17]. Computationally, NN performs very fast [18]. NN was introduced by Solomon in 1987; its concept is to visit the nearest location from each location currently being visited [19].…”
Section: Nearest Neighbor (unclassified)
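The quoted passage describes the nearest-neighbour routing heuristic: start from a depot and repeatedly move to the closest not-yet-visited location. A small sketch of that idea follows; the coordinates and Euclidean metric are illustrative assumptions, not taken from the cited works.

```python
# Sketch of the nearest-neighbour routing heuristic: always move to the
# nearest unvisited location. Illustrative only.
import math

def nn_route(depot, locations):
    """Build a route by repeatedly visiting the nearest unvisited location."""
    route, current = [depot], depot
    remaining = list(locations)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

if __name__ == "__main__":
    stops = [(2.0, 3.0), (5.0, 1.0), (1.0, 1.0), (4.0, 4.0)]
    print(nn_route((0.0, 0.0), stops))
```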
“…In particular, we employ smoothness concepts from Chaudhuri and Dasgupta [10] and Kpotufe [20]. A related line of work in the literature are the Bayes consistent 1-nearest neighbour methods [14,15,18,19]. These works are focused upon classification, rather than optimisation, and as such do not make the smoothness assumptions required by [10] and the present work.…”
Section: Related Work (mentioning)
confidence: 99%
“…Surveying the research in this applied area is beyond the scope of our research. We refer to Biniaz et al [5] and Gottlieb et al [16] for some of the latest algorithmic results on this topic. Considering each class as a Voronoi cell, the inverse Voronoi problem is asking precisely whether there exists a consistent subset with one element per class.…”
Section: Introduction (mentioning)
confidence: 99%
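The "consistent subset" notion referenced above has a simple check: a subset S of a labelled point set is consistent if every original point is classified correctly by its nearest neighbour in S. The sketch below is purely illustrative; the Euclidean metric and toy data are assumptions, and this is not the construction studied in the cited works.

```python
# Check whether a candidate subset is a consistent subset for 1-NN
# classification of the full labelled set. Illustrative only.
import math

def is_consistent(subset, full_set):
    """True if 1-NN over `subset` labels every point of `full_set` correctly."""
    for x, y in full_set:
        nearest = min(subset, key=lambda s: math.dist(x, s[0]))
        if nearest[1] != y:
            return False
    return True

if __name__ == "__main__":
    data = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((2.0, 2.0), "B"), ((2.1, 1.9), "B")]
    candidate = [((0.0, 0.0), "A"), ((2.0, 2.0), "B")]  # one element per class
    print(is_consistent(candidate, data))  # True for this toy example
```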