Optimal (Euclidean) Metric Compression
2022
DOI: 10.1137/20m1371324

Cited by 4 publications (6 citation statements)
References 24 publications
“…The above-mentioned "binary" version of the Johnson-Lindenstrauss lemma due to [1] (where the entries of the matrix are all either +1 or −1) is particularly important for the following reason. As discussed in [8], an alternate way to convert sketches obtained by the Johnson-Lindenstrauss lemma to bits is possible if the points have bounded integer coordinates and one uses the binary variant of the Johnson-Lindenstrauss lemma. This approach is somewhat incomparable to our setting because of the integrality assumption.…”
Section: Previous Variations on Random Projection (mentioning, confidence: 99%)
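To make the binary ±1 projection and the bits-from-integer-coordinates idea concrete, here is a minimal Python sketch under the assumptions of that passage; the function names, target dimension, and constants are our own illustration, not the construction of [1] or [8].

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_jl_sketch(points, k):
    """Project integer-coordinate points with a random +/-1 matrix.

    Because both the points and the matrix entries are integers, the
    un-normalized sketch is itself integer-valued, so it can be stored
    exactly with a bounded number of bits per coordinate.
    """
    d = points.shape[1]
    S = rng.choice([-1, 1], size=(k, d))   # binary JL matrix, entries +/-1
    return S, points @ S.T                 # integer-valued sketches

def approx_dist(sketch_a, sketch_b, k):
    # The 1/sqrt(k) JL normalization is applied only at estimation time,
    # so the stored sketches stay integral.
    return np.linalg.norm(sketch_a - sketch_b) / np.sqrt(k)

# Toy usage: points with bounded integer coordinates in {0, ..., 100}.
X = rng.integers(0, 101, size=(5, 1000))
S, sk = binary_jl_sketch(X, k=200)
print(np.linalg.norm(X[0] - X[1]), approx_dist(sk[0], sk[1], k=200))
```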
“…However, this is not the optimal number of bits. It was recently shown in [7, 8] that if the points are contained in the unit ball and m is the minimum distance between points, then O(ε^{-2} n log n + n log log(1/m)) bits suffice. Prior to this result, the best known bound was that O(ε^{-2} n log n log(1/m)) bits suffice [10].…”
Section: Distance Compression Beyond Random Mappings (mentioning, confidence: 99%)
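For intuition about where the log(1/m) factor in the earlier O(ε^{-2} n log n log(1/m)) bound comes from, the following is a rough sketch of the naive baseline (ours, not the scheme of [7, 8] or [10]): apply a standard JL projection, then round every coordinate to a fine grid. The dimension and grid constants are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def grid_quantized_jl(points, eps, m):
    """Naive baseline: JL-project, then round coordinates to a grid.

    points: n points in the unit ball; m: minimum pairwise distance.
    Target dimension k = O(eps^-2 log n); rounding each coordinate to
    cells of width about eps*m/sqrt(k) keeps the additive error below
    eps*m, at roughly log2(1/(eps*m)) + O(log log n) bits per coordinate,
    hence a total on the order of eps^-2 n log n log(1/m).
    """
    n, d = points.shape
    k = int(np.ceil(8 * np.log(n) / eps**2))        # illustrative constant
    G = rng.normal(size=(k, d)) / np.sqrt(k)        # standard Gaussian JL map
    cell = eps * m / np.sqrt(k)                     # grid resolution
    codes = np.round((points @ G.T) / cell).astype(np.int64)
    bits_per_coord = np.ceil(np.log2(2.0 / cell))   # coordinates lie in about [-1, 1]
    return codes, cell, int(n * k * bits_per_coord)

# Toy usage: 100 unit-norm points, minimum distance taken to be 1e-3.
X = rng.normal(size=(100, 50))
X /= np.linalg.norm(X, axis=1, keepdims=True)
codes, cell, total_bits = grid_quantized_jl(X, eps=0.2, m=1e-3)
print(total_bits, "bits for the naive baseline")
```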
“…1 for an example. Even in the theoretical literature on optimal vector compression, such clustering plays a crucial role (Indyk & Wagner, 2022). All these quantization methods share one obvious drawback compared to hashing: the model is only quantized after training, so memory utilization during training is unaffected.…”
Section: Small Clustered Embedding Table (mentioning, confidence: 99%)
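As a concrete, intentionally minimal illustration of the post-training, clustering-based quantization the passage alludes to, here is a sketch assuming scikit-learn's KMeans; the helper names and the choice of 256 centroids are our own and are not tied to any specific method in the cited work.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_quantize(table, n_clusters=256):
    """Replace each row of an embedding table by its nearest centroid id.

    The (vocab, dim) float table is stored as n_clusters centroids plus one
    small integer id per row. This happens only after training, so
    training-time memory is unchanged (the drawback the passage mentions).
    """
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(table)
    centroids = km.cluster_centers_
    ids = km.labels_.astype(np.uint8)      # one byte per row for <= 256 clusters
    return centroids, ids

def lookup(centroids, ids, row):
    # Reconstruct the (approximate) embedding for one vocabulary index.
    return centroids[ids[row]]

# Toy usage: a 10,000 x 64 table compressed to 256 centroids + 10,000 byte ids.
table = np.random.default_rng(2).normal(size=(10_000, 64)).astype(np.float32)
centroids, ids = cluster_quantize(table)
print(centroids.nbytes + ids.nbytes, "bytes after vs", table.nbytes, "before")
```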
“…We also don't cover the background of "post-training" quantization, but refer to the survey by Gray & Neuhoff (1998). Finally, we keep things reasonably heuristic, but for a deep theoretical understanding of metric compression, we recommend Indyk & Wagner (2022). In this framework we can also express most other approaches to training-time table compression:…”
Section: Background and Related Work (mentioning, confidence: 99%)