2018
DOI: 10.1007/s00454-017-9964-x

Fast Binary Embeddings with Gaussian Circulant Matrices: Improved Bounds

Abstract: We consider the problem of encoding a finite set of vectors into a small number of bits while approximately retaining information on the angular distances between the vectors. By deriving improved variance bounds related to binary Gaussian circulant embeddings, we largely fix a gap in the proof of the best known fast binary embedding method. Our bounds also show that well-spreadness assumptions on the data vectors, which were needed in earlier work on variance bounds, are unnecessary. In addition, we propose a…
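For intuition, here is a minimal sketch of a fast binary embedding of the kind the paper studies, assuming a circulant matrix with a Gaussian generator, random sign flips, and sign quantization; the function and parameter names are illustrative and not the paper's exact construction.

```python
import numpy as np

def make_circulant_embedding(n, m, seed=0):
    """Sketch of a fast binary embedding f(x) = sign((C_g D x)[:m]).

    C_g is a circulant matrix with i.i.d. Gaussian generator g and D is
    a diagonal matrix of random sign flips; the circulant product is
    computed in O(n log n) time via the FFT. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    g_hat = np.fft.fft(rng.standard_normal(n))   # spectrum of the Gaussian generator
    d = rng.choice([-1.0, 1.0], size=n)          # random sign flips (matrix D)
    def embed(x):
        y = np.fft.ifft(g_hat * np.fft.fft(d * x)).real  # C_g D x via FFT
        return np.sign(y[:m])                    # keep m rows, quantize to {-1, +1}
    return embed

# The normalized Hamming distance between two embedded vectors tracks
# the angular distance (angle / pi) between the original vectors.
rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(512), rng.standard_normal(512)
embed = make_circulant_embedding(512, 256)
hamming = np.mean(embed(x1) != embed(x2))
angle = np.arccos(x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2))) / np.pi
print(f"normalized Hamming: {hamming:.3f}, angle/pi: {angle:.3f}")
```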

Cited by 10 publications (11 citation statements).
References 13 publications (22 reference statements).
“…The setup and results are related to ours, but bear several important differences, including (i) the use of regularization; (ii) the focus on sub-Gaussian noise added before quantization; (iii) the use of dithering-type thresholds in the measurements; (iv) the consideration of sparse signals instead of generative priors; and (v) the use of a random (rather than fixed) index set in the partial circulant matrix. Some theoretical guarantees for circulant binary embeddings, which are closely related to 1-bit CS with partial Gaussian circulant matrices, have been presented in [24], [33]-[35]. On the other hand, following successful applications of deep generative models, it has recently become popular in CS to assume that the signal vector lies in the range of a generative model rather than assuming it is sparse [3], [36]-[43].…”
Section: A. Related Work
confidence: 99%
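The "dithering-type thresholds" in item (iii) above can be sketched generically as follows, assuming Gaussian measurement vectors and a uniform dither; the names and parameters are hypothetical and not taken from the cited works.

```python
import numpy as np

def dithered_one_bit_measurements(x, m, lam, seed=0):
    """Generic dithered one-bit measurements: y_i = sign(<a_i, x> + tau_i).

    The a_i are i.i.d. Gaussian rows and tau_i ~ Unif[-lam, lam] is the
    dither; dithering lets one-bit measurements retain magnitude
    information rather than only the direction of x. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, x.shape[0]))     # Gaussian measurement matrix
    tau = rng.uniform(-lam, lam, size=m)         # dithering-type thresholds
    return np.sign(A @ x + tau)

x = np.random.default_rng(2).standard_normal(64)
y = dithered_one_bit_measurements(x, m=200, lam=3.0)
```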
“…Unlike previous works on quantization under frame or Compressed Sensing measurements (see, e.g., [36,6,28,5,23,29,3,30,21,2,19,27,34,35,25,18,19,20]), where samples are assumed to be taken via random Gaussian/sub-Gaussian or Fourier measurements, here we allow direct quantization of each pixel and thereby ensure maximal practicality.…”
Section: Contribution
confidence: 99%
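The contrast drawn in this statement, quantizing random measurements of a signal versus quantizing each pixel directly, can be made concrete with a toy sketch; the threshold and dimensions below are illustrative assumptions, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))                       # toy "pixel" data in [0, 1]

# Measurement-domain quantization (compressed-sensing style): quantize
# random projections of the whole signal.
A = rng.standard_normal((32, image.size))
bits_measurements = np.sign(A @ image.ravel())

# Direct per-pixel quantization: each pixel is quantized on its own,
# with no random measurement step (threshold 0.5 is an arbitrary choice).
bits_pixels = np.sign(image.ravel() - 0.5)
```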
“…To address the first point above, researchers have tried to design other binary embeddings that can be implemented more efficiently. Examples include circulant binary embedding [72,53,24], structured hashed projections [16], and binary embeddings based on Walsh-Hadamard matrices and partial Gaussian Toeplitz matrices with random column flips [71]. To our knowledge, none of these results addresses the second point above; they use the sign function (1.3) (an instance of Memoryless Scalar Quantization) to quantize the measurements.…”
Section: Drawback (ii)
confidence: 99%
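Memoryless Scalar Quantization, as referenced via the sign function (1.3), quantizes each measurement coordinate independently of all the others. A hedged sketch combining it with a Walsh-Hadamard projection with random sign flips, loosely in the spirit of the constructions cited above but not the exact scheme of [71]:

```python
import numpy as np
from scipy.linalg import hadamard

def msq_hadamard_embedding(x, m, seed=0):
    """Memoryless scalar quantization of fast structured measurements.

    y = (H D x) restricted to m random rows, with H an orthonormal
    Walsh-Hadamard matrix and D random sign flips; sign() is then
    applied to each coordinate of y independently ("memorylessly").
    Illustrative only.
    """
    n = x.shape[0]                               # n must be a power of 2
    rng = np.random.default_rng(seed)
    d = rng.choice([-1.0, 1.0], size=n)          # random column sign flips
    H = hadamard(n) / np.sqrt(n)                 # orthonormal Walsh-Hadamard
    rows = rng.choice(n, size=m, replace=False)  # random row subsampling
    y = H[rows] @ (d * x)                        # structured measurements
    return np.sign(y)                            # coordinate-wise quantization

bits = msq_hadamard_embedding(np.random.default_rng(1).standard_normal(128), m=64)
```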