Let K be an isotropic convex body in R^n. Given ε > 0, how many independent points X_i uniformly distributed on K are needed for the empirical covariance matrix to approximate the identity up to ε with overwhelming probability? Our paper answers this question posed in [12]. More precisely, let X ∈ R^n be a centered random vector with a log-concave distribution and with the identity as covariance matrix. An example of such a vector X is a random point in an isotropic convex body. We show that for any ε > 0 there exists C(ε) > 0 such that if N ∼ C(ε) n and (X_i)_{i≤N} are i.i.d. copies of X, then ‖(1/N) ∑_{i≤N} X_i ⊗ X_i − Id‖ ≤ ε with probability larger than 1 − exp(−c√n).
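As an illustration (not part of the paper), the statement can be checked numerically in a simple special case: coordinates uniform on [−√3, √3] give a centered, isotropic, log-concave vector, and the operator-norm deviation of the empirical covariance from the identity should already be small for N a moderate multiple of n. The dimensions below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 5000  # dimension and sample size, N a constant multiple of n

# Uniform on the cube [-sqrt(3), sqrt(3)]^n: each coordinate is centered
# with variance 1, so X is isotropic with a log-concave distribution.
X = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(N, n))

# Empirical covariance matrix (1/N) * sum_i X_i (X_i)^T
emp_cov = X.T @ X / N

# Operator-norm distance to the identity
dev = np.linalg.norm(emp_cov - np.eye(n), ord=2)
print(f"||Sigma_N - Id||_op = {dev:.3f}")
```

The deviation is on the order of sqrt(n/N); increasing N at fixed n shrinks it, in line with the N ∼ C(ε) n sample-complexity statement.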
Abstract: We present a randomized method to approximate any vector v from some set T ⊂ R^n. The data one is given is the set T, vectors (X_i)_{i=1}^k in R^n and the k scalar products (⟨X_i, v⟩)_{i=1}^k, where (X_i)_{i=1}^k are i.i.d. isotropic subgaussian random vectors in R^n and k ≪ n. We show that with high probability, any y ∈ T for which (⟨X_i, y⟩)_{i=1}^k is close to the data vector (⟨X_i, v⟩)_{i=1}^k will be a good approximation of v, and that the degree of approximation is determined by a natural geometric parameter associated with the set T. We also investigate a random method to identify exactly any vector which has a relatively short support, using linear subgaussian measurements as above. It turns out that our analysis, when applied to {−1, 1}-valued vectors with i.i.d. symmetric entries, yields new information on the geometry of faces of random {−1, 1}-polytopes; we show that a k-dimensional random {−1, 1}-polytope with n vertices is m-neighborly for very large m ≤ ck/log(cn/k). The proofs are based on new estimates on the behavior of the empirical process sup_{f∈F} |k^{-1} ∑_{i=1}^k f^2(X_i) − E f^2|. The estimates are given in terms of the γ_2 functional with respect to the ψ_2 metric on F, and hold both in exponential probability and in expectation.

Introduction
The aim of this article is to investigate the linear "approximate reconstruction" problem in R^n. In such a problem, one is given a set T ⊂ R^n and the goal is to approximate any unknown v ∈ T using random linear measurements. In other words, one is given the set of values (⟨X_i, v⟩)_{i=1}^k, where X_1, ..., X_k are independent random vectors in R^n selected according to some probability measure μ. Using this information (and the fact that the unknown vector v belongs to T), one has to produce, with very high probability with respect to μ^k, some t ∈ T such that the Euclidean norm |t − v| ≤ ε(k), for ε(k) as small as possible.
Of course, the random sampling method has to be "universal" in some sense and not tailored to a specific set T; and it is natural to expect that the degree of approximation ε(k) depends on some geometric parameter associated with T. Questions of a similar flavor have been thoroughly studied in approximation theory for the purpose of computing Gelfand numbers (see in particular [Ka, GG] when T is the unit ball of ℓ_1^n), in the asymptotic theory of Banach spaces for the analysis of low-codimensional sections (see [Mi, PT1]), and, in the form and language presented above, in nonparametric statistics and statistical learning theory in [MT] (for more information see, for example, [BBL] and [M] and the references therein). This particular problem has been addressed by several authors with a view toward applications to signal reconstruction (see [CT1, CT2, CT3] for the most recent contributions), in the following context: the sets considered were either the unit ball of ℓ_1^n or the unit balls of weak ℓ_p^n spaces for 0 < p < 1, and the proofs of the approximation estimates depended on the choice of those particular sets. The sampling process was done when the X_i were distributed according to the G...
This paper considers compressed sensing matrices and the neighborliness of a centrally symmetric convex polytope generated by vectors ±X_1, ..., ±X_N ∈ R^n (N ≥ n). We introduce a class of random sampling matrices and show that they satisfy a restricted isometry property (RIP) with overwhelming probability. In particular, we prove that matrices with i.i.d. centered entries of variance 1 that uniformly satisfy a sub-exponential tail inequality possess this RIP with overwhelming probability. We show that such "sensing" matrices are valid for the exact reconstruction of m-sparse vectors via ℓ_1 minimization with m ≤ Cn/log^2(cN/n). The class of sampling matrices we study includes matrices whose columns are independent isotropic vectors with log-concave densities. We deduce that if K ⊂ R^n is a convex body and X_1, ..., X_N ∈ K are i.i.d. random vectors uniformly distributed on K, then, with overwhelming probability, the symmetric convex hull of these points is an m-centrally-neighborly polytope with m ∼ n/log^2(cN/n).
AMS Classification: primary 52A20, 94A12, 52B12, 46B09; secondary 15A52, 41A45, 94B75
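The exact-reconstruction claim via ℓ_1 minimization can be illustrated numerically (an illustration only, not the paper's method): recover an m-sparse v from k ≪ n subgaussian measurements by solving min ‖y‖_1 subject to Ay = Av, cast as a linear program via the standard split y = p − q with p, q ≥ 0. The sketch below uses a Gaussian measurement matrix and scipy's `linprog`; the dimensions are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k, m = 60, 30, 3  # ambient dimension, number of measurements, sparsity

# m-sparse target vector v
v = np.zeros(n)
v[rng.choice(n, size=m, replace=False)] = rng.standard_normal(m)

# i.i.d. Gaussian (subgaussian) measurement matrix; rows play the role of X_i
A = rng.standard_normal((k, n))
b = A @ v  # the k linear measurements <X_i, v>

# l1 minimization: min ||y||_1 s.t. Ay = b, as an LP with y = p - q, p,q >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
y = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(y - v))
```

With k well above the m log(cn/m) threshold, the minimizer coincides with v up to solver tolerance; shrinking k or growing m past that scale makes recovery fail, matching the m ≤ Cn/log^2(cN/n) regime in spirit.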