2009
DOI: 10.4064/sm195-3-3

Generalizing the Johnson–Lindenstrauss lemma to k-dimensional affine subspaces

Abstract: Let ε > 0 and 1 ≤ k ≤ n, and let {W_l}_{l=1}^p be affine subspaces of R^n, each of dimension at most k. Let m = O(ε^{-2}(k + log p)) if ε < 1, and m = O(k + log p / log(1 + ε)) if ε ≥ 1. We prove that there is a linear map H : R^n → R^m such that for all 1 ≤ l ≤ p and x, y ∈ W_l we have ‖x − y‖_2 ≤ ‖H(x) − H(y)‖_2 ≤ (1 + ε)‖x − y‖_2, i.e. the distance distortion is at most 1 + ε. The estimate on m is tight in terms of k and p whenever ε < 1, and is tight in terms of ε, k and p whenever ε ≥ 1. We extend these results to e…
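In this line of work the map H is realized by a rescaled random Gaussian matrix (see the Gaussian-matrix citation statements below). The following Python/NumPy snippet is a minimal numerical sketch of that idea only: the parameters n, k, p, ε, the sample counts, and the constant 4 in the choice of m are illustrative assumptions rather than values from the paper, and it measures the symmetric deviation |‖G(x − y)‖/‖x − y‖ − 1| instead of the paper's one-sided normalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper): ambient dimension n,
# subspace dimension k, number of affine subspaces p, target distortion eps.
n, k, p, eps = 1000, 5, 20, 0.5

# m = O(eps^-2 (k + log p)) for eps < 1; the constant 4 is a guess for illustration.
m = int(np.ceil(4 * eps**-2 * (k + np.log(p))))

# Candidate map H(x) = G x, where G has i.i.d. N(0, 1/m) entries.
G = rng.standard_normal((m, n)) / np.sqrt(m)

worst = 0.0
for _ in range(p):
    # A random k-dimensional affine subspace of R^n: column span of B, shifted by b.
    B = rng.standard_normal((n, k))
    b = rng.standard_normal(n)
    for _ in range(50):
        # Sample pairs x, y in the subspace and compare ||G(x - y)|| with ||x - y||.
        x = b + B @ rng.standard_normal(k)
        y = b + B @ rng.standard_normal(k)
        ratio = np.linalg.norm(G @ (x - y)) / np.linalg.norm(x - y)
        worst = max(worst, abs(ratio - 1.0))

print(f"m = {m}, worst observed |ratio - 1| over sampled pairs: {worst:.3f}")
```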

Cited by 4 publications (8 citation statements) · References 13 publications
“…[8, 10–12, 14, 17, 24, 28–30]). This list of course does not include the enormous quantity of published works which deal with evaluations and applications of max and min associated with various random parameters, e.g., smallest and largest eigenvalues of random matrices, as these are the extreme values in the scale of order statistics.…”
Section: Introduction (mentioning)
confidence: 99%
“…To name only a few: Wireless networks, signal processing, image processing, compressed sensing, data reconstruction, learning theory and data mining. A sample of works done in this area are [2,4,6,8,9,10,12,13,17,31,33].…”
Section: Introduction (mentioning)
confidence: 99%
“…Remark 37. The JL lemma was reproved many times; see [94,102,121,23,75,2,136,122,166,3,82,138,4,83,133,49,81], though we make no claim that this is a comprehensive list of references. There were several motivations for these further investigations, ranging from the desire to obtain an overall better understanding of the JL phenomenon, to obtain better bounds, and to obtain distributions on random matrices A as in (31) with certain additional properties that are favorable from the computational perspective, such as ease of simulation, use of fewer random bits, sparsity, and the ability to evaluate the mapping (z ∈ R n−1 ) → Az quickly (akin to the fast Fourier transform).…”
Section: Proof of Proposition 35 (mentioning)
confidence: 99%
“…standard Gaussian entries. We recall another of the main results in [7] which holds for Gaussian matrices G for large and small distortions: Theorem 1.1 There is a positive constant c such that the following holds: Given 0 < ε < ∞ and 1 ≤ k ≤ n and p subspaces {W_l}_{l=1}^p of dimension at most k in ℓ_2^n, and any m ≥ c[(1 + ε^{-2})k + ((1 + ε)/(ε ln(1 + ε))) ln p], and a Gaussian m × n matrix G with i.i.d. standard Gaussian entries, there is a number E such that the probability that for every 1 ≤ l ≤ p and x, y ∈ W_l…”
Section: Introduction (mentioning)
confidence: 95%
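Reading off the quoted bound, the small helper below (a sketch only; the excerpt does not specify the constant c, so c = 1.0 is a placeholder) shows how the required m behaves in the two regimes of the abstract, roughly ε^{-2}(k + log p) for ε < 1 and k + log p / log(1 + ε) for ε ≥ 1:

```python
import math

def required_m(eps: float, k: int, p: int, c: float = 1.0) -> float:
    """Embedding-dimension bound read off the quoted Theorem 1.1.

    The constant c is not specified in the excerpt; c = 1.0 is a placeholder.
    """
    return c * ((1 + eps ** -2) * k + (1 + eps) / (eps * math.log(1 + eps)) * math.log(p))

# For eps < 1 the bound scales like eps^-2 (k + log p); for eps >= 1 it scales
# like k + log p / log(1 + eps), matching the two regimes in the abstract.
for eps in (0.1, 0.5, 1.0, 4.0):
    print(f"eps = {eps}: m >= c * {required_m(eps, k=5, p=20):.1f}")
```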
“…The paper is concerned with large controlled distance distortion of preassigned magnitude D by random linear maps H ∈ L(R^n, R^m), where H is a matrix whose rows satisfy certain conditions. One example of large distortion is Theorem 2.8 in [7], where G = (g_{j,i}), 1 ≤ j ≤ m, 1 ≤ i ≤ n, is a Gaussian matrix with i.i.d. standard Gaussian entries.…”
Section: Introduction (mentioning)
confidence: 99%