Abstract: We present a randomized method to approximate any vector $v$ from some set $T \subset \mathbb{R}^n$. The data one is given is the set $T$, vectors $(X_i)_{i=1}^k$ in $\mathbb{R}^n$ and $k$ scalar products $(\langle X_i, v\rangle)_{i=1}^k$, where $(X_i)_{i=1}^k$ are i.i.d. isotropic subgaussian random vectors in $\mathbb{R}^n$ and $k \ll n$. We show that with high probability, any $y \in T$ for which $(\langle X_i, y\rangle)_{i=1}^k$ is close to the data vector $(\langle X_i, v\rangle)_{i=1}^k$ will be a good approximation of $v$, and that the degree of approximation is determined by a natural geometric parameter associated with the set $T$.

We also investigate a random method to identify exactly any vector which has a relatively short support using linear subgaussian measurements as above. It turns out that our analysis, when applied to $\{-1,1\}$-valued vectors with i.i.d. symmetric entries, yields new information on the geometry of faces of random $\{-1,1\}$-polytopes; we show that a $k$-dimensional random $\{-1,1\}$-polytope with $n$ vertices is $m$-neighborly for very large $m \le ck/\log(Cn/k)$.

The proofs are based on new estimates on the behavior of the empirical process $\sup_{f \in F} \bigl| k^{-1} \sum_{i=1}^k f^2(X_i) - \mathbb{E} f^2 \bigr|$. The estimates are given in terms of the $\gamma_2$ functional with respect to the $\psi_2$ metric on $F$, and hold both in exponential probability and in expectation.
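To convey the flavor of the scheme described above, the following Python sketch is a purely hypothetical numerical illustration and not the procedure analyzed in this paper: it takes $T$ to be a multiple of the unit ball of $\ell_1^n$, uses symmetric Bernoulli vectors as one instance of isotropic subgaussian measurements, and produces some $y \in T$ whose measurements match the data by solving a basis-pursuit linear program. The dimensions, the solver, and the choice of this particular $y$ are assumptions of the sketch.

# Hypothetical illustration (not from the paper): recover a short-support vector v
# from k << n scalar products with i.i.d. symmetric Bernoulli (isotropic subgaussian)
# measurement vectors, by producing a y in the l1-ball T = ||v||_1 * B_1^n whose
# measurements match the data.  Basis pursuit is used only as one convenient way
# to obtain such a y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, s = 200, 60, 5                       # ambient dimension, measurements, support size

v = np.zeros(n)                            # unknown vector with short support
support = rng.choice(n, size=s, replace=False)
v[support] = rng.standard_normal(s)

X = rng.choice([-1.0, 1.0], size=(k, n))   # rows X_i: symmetric Bernoulli entries
data = X @ v                               # the k scalar products <X_i, v>

# Basis pursuit as a linear program in the variables (y, u):
#   minimize sum(u)  subject to  -u <= y <= u  and  X y = data,
# which returns a y of minimal l1-norm matching the data, hence some y in T.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([X, np.zeros((k, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=data,
              bounds=[(None, None)] * n + [(0, None)] * n)
y = res.x[:n]

print("Euclidean error |y - v| =", np.linalg.norm(y - v))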
Introduction

The aim of this article is to investigate the linear "approximate reconstruction" problem in $\mathbb{R}^n$. In such a problem, one is given a set $T \subset \mathbb{R}^n$ and the goal is to be able to approximate any unknown $v \in T$ using random linear measurements. In other words, one is given the set of values $(\langle X_i, v\rangle)_{i=1}^k$, where $X_1, \ldots, X_k$ are given independent random vectors in $\mathbb{R}^n$ selected according to some probability measure $\mu$. Using this information (and the fact that the unknown vector $v$ belongs to $T$) one has to produce, with very high probability with respect to $\mu^k$, some $t \in T$ such that the Euclidean norm $|t - v| \le \varepsilon(k)$ for $\varepsilon(k)$ as small as possible. Of course, the random sampling method has to be "universal" in some sense and not tailored to a specific set $T$; and it is natural to expect that the degree of approximation $\varepsilon(k)$ depends on some geometric parameter associated with $T$.

Questions of a similar flavor have been thoroughly studied in approximation theory for the purpose of computing Gelfand numbers (see in particular [Ka, GG] when $T$ is the unit ball in $\ell_1^n$), in the asymptotic theory of Banach spaces for the analysis of low-codimensional sections (see [Mi, PT1]), and, in the form and language presented above, in nonparametric statistics and statistical learning theory in [MT] (for more information see, for example, [BBL] and [M] and references therein). This particular problem has been addressed by several authors with a view to applications in signal reconstruction (see [CT1, CT2, CT3] for the most recent contributions), in the following context: the sets considered were either the unit ball in $\ell_1^n$ or the unit balls in weak $\ell_p^n$ spaces for $0 < p < 1$, and the proofs of the approximation estimates depended on the choice of those particular sets. The sampling process was done when $X_i$ were distributed according to the G...