2019
DOI: 10.1007/s00453-019-00644-y

On Using Toeplitz and Circulant Matrices for Johnson–Lindenstrauss Transforms

Abstract: The Johnson-Lindenstrauss lemma is one of the cornerstone results in dimensionality reduction. It says that given N, for any set of N vectors X ⊂ R^n, there exists a mapping f : X → R^m such that f(X) preserves all pairwise distances between vectors in X to within (1 ± ε) if m = O(ε^{-2} lg N). Much effort has gone into developing fast embedding algorithms, with the Fast Johnson-Lindenstrauss transform of Ailon and Chazelle being one of the most well-known techniques. The current fastest algorithm that yiel…
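The construction named in the title applies a random partial circulant matrix after a random sign flip, which can be carried out in O(n lg n) time via the FFT. Below is a minimal numpy sketch of that general idea; the function name, the Gaussian entries, and the 1/sqrt(m) normalization are illustrative assumptions rather than the exact construction analysed in the paper.

```python
import numpy as np

def make_circulant_jl(n, m, rng=None):
    """Return a map f: R^n -> R^m built from a random partial circulant matrix.

    Sketch only: x is multiplied by a random diagonal sign matrix D and then by
    a circulant matrix with a Gaussian first column (a circular convolution,
    applied via the FFT in O(n lg n) time); the first m coordinates are kept
    and rescaled.
    """
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice([-1.0, 1.0], size=n)        # diagonal of the sign matrix D
    c_hat = np.fft.fft(rng.standard_normal(n))     # FFT of the circulant's first column

    def embed(x):
        y = np.fft.ifft(c_hat * np.fft.fft(signs * x)).real  # circulant matrix times D x
        return y[:m] / np.sqrt(m)                             # keep m coordinates, rescale

    return embed

# Example: the distance between two embedded points stays roughly the same.
f = make_circulant_jl(n=4096, m=512, rng=np.random.default_rng(0))
u, v = np.random.default_rng(1).standard_normal((2, 4096))
print(np.linalg.norm(u - v), np.linalg.norm(f(u) - f(v)))
```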

Cited by 4 publications (3 citation statements)
References 25 publications
“…While it might seem at a glance that bounding the high-order moments of X(x) is merely a technical issue, known tools and techniques could not be used to prove Lemmas 3 and 4. In particular, earlier work by Kane and Nelson [KN14, CJN18] and Freksen and Larsen [FL17] used high-order moment bounds as a step in proving probability tail bounds of random variables. The existing techniques, however, cannot be adapted to bound the high-order moments of X(x) (see also Section 1.2), and novel approaches were needed.…”
Section: Results (mentioning)
confidence: 99%
“…Moreover, Assumption 6 is also satisfied by certain structured random matrices that enable efficient computation of the compressive mapping, most notably the Fast Johnson-Lindenstrauss transform (Ailon and Chazelle, 2006), and random matrices that exploit sparse matrix multiplication (Kane and Nelson, 2014). The quest to develop efficient Johnson-Lindenstrauss transforms is currently an active research area; some constructions require a slightly larger target dimension in exchange for greater savings in computation time (Freksen and Larsen, 2020). However, the target dimension of order log(q)ε^{-2} is known to be optimal, in that any transform that satisfies Assumption 6 uniformly over any set of q points must have a target dimension of this order (Larsen and Nelson, 2017).…”
Section: Main Upper Bound (mentioning)
confidence: 99%
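For context, the Fast Johnson-Lindenstrauss transform mentioned above has the shape f(x) = S H D x: D flips signs at random, H is a Walsh-Hadamard transform computable in O(d ln d) time, and S reduces the dimension. The sketch below illustrates that shape with plain coordinate sampling for S; the actual Ailon-Chazelle construction uses a sparse Gaussian matrix for that last step, and the names and normalization here are assumptions made for the example.

```python
import numpy as np

def fwht(a):
    """Unnormalized fast Walsh-Hadamard transform; len(a) must be a power of two."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

def make_fjlt_like(n, m, rng=None):
    """Return a simplified FJLT-style map f(x) = (1/sqrt(m)) * S H D x.

    The 1/sqrt(m) factor combines the 1/sqrt(n) that makes H orthonormal with
    the sqrt(n/m) rescaling for keeping m of the n coordinates.
    """
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice([-1.0, 1.0], size=n)       # diagonal of D
    rows = rng.choice(n, size=m, replace=False)   # coordinates kept by S
    return lambda x: fwht(signs * x)[rows] / np.sqrt(m)
```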
“…This includes e.g. five solutions with O(d ln d) embedding time, but different sub-optimal k = O(ε^{-2} ln n ln^4 d) [16], k = O(ε^{-2} ln^3 n) [6], k = O(ε^{-1} ln^{3/2} n ln^{3/2} d + ε^{-2} ln n ln^4 d) [16], k = O(ε^{-2} ln^2 n) [9, 23, 8] and k = O(ε^{-2} ln n ln^2(ln n) ln^3 d) [12], respectively. The second category is solutions where one assumes that k is significantly smaller than d.…”
Section: Introduction (mentioning)
confidence: 99%