Compressed sensing aims to undersample certain high-dimensional signals, yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately, known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding that makes the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity-undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.

combinatorial geometry | phase transitions | linear programming | iterative thresholding algorithms | state evolution

Compressed sensing refers to a growing body of techniques that "undersample" high-dimensional signals and yet recover them accurately (1). Such techniques make fewer measurements than traditional sampling theory demands: rather than sampling in proportion to frequency bandwidth, they make only as many measurements as the underlying "information content" of those signals. However, compared with traditional sampling theory, which can recover signals by applying simple linear reconstruction formulas, the task of signal recovery from reduced measurements requires nonlinear and, so far, relatively expensive reconstruction schemes. One popular class of reconstruction schemes uses linear programming (LP) methods; there is an elegant theory for such schemes promising large improvements over ordinary sampling rules in recovering sparse signals. However, solving the required LPs is substantially more expensive in applications than the linear reconstruction schemes that are now standard. In certain imaging problems, the signal to be acquired may be an image with 10^6 pixels, and the required LP would involve tens of thousands of constraints and millions of variables. Despite advances in the speed of LP, such problems are still dramatically more expensive to solve than we would like.

Here, we develop an iterative algorithm achieving reconstruction performance in one important sense identical to LP-based reconstruction while running dramatically faster. We assume that a vector y of n measurements is obtained from an unknown N-vector x_0 according to y = Ax_0, where A is the n × N measurement matrix, with n < N.
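The measurement model y = Ax_0 and the thresholding iteration described above can be illustrated in a few lines of NumPy. The sketch below implements iterative soft thresholding together with a reaction ("Onsager") correction term of the kind the abstract calls a costless modification; the Gaussian sensing matrix, the threshold rule (a fixed multiple of the empirical residual level), and all parameter values are illustrative assumptions, not the tuned choices from the paper.

```python
import numpy as np

def soft_threshold(x, theta):
    """Componentwise soft thresholding: sign(x) * max(|x| - theta, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def amp_sketch(y, A, n_iter=30, kappa=2.0):
    """Minimal AMP-style iteration for y = A @ x0 with a sparse x0.

    The Onsager correction added to the residual update is the "costless
    modification" discussed in the text; dropping it gives plain iterative
    soft thresholding.  The threshold kappa * tau is an illustrative choice,
    not the paper's tuned rule.
    """
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(n)       # empirical residual level
        pseudo_data = x + A.T @ z                   # x^t + A* z^t
        x_new = soft_threshold(pseudo_data, kappa * tau)
        # Onsager term: (1/delta) * <eta'> * z  =  (#nonzeros / n) * z
        onsager = (np.count_nonzero(x_new) / n) * z
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Toy usage: a 250 x 1000 Gaussian sensing matrix and a 30-sparse signal.
rng = np.random.default_rng(0)
n, N, k = 250, 1000, 30
A = rng.standard_normal((n, N)) / np.sqrt(n)   # columns with l2 norm near 1
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x_hat = amp_sketch(A @ x0, A)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```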
PART 1: BACKGROUND
1 Introduction to information theory
  1.1 Random variables
  1.2 Entropy
  1.3 Sequences of random variables and their entropy rate
  1.4 Correlated variables and mutual information
  1.5 Data compression
  1.6 Data transmission
  Notes
2 Statistical physics and probability theory
  2.1 The Boltzmann distribution
  2.2 Thermodynamic potentials
  2.3 The fluctuation-dissipation relations
  2.4 The thermodynamic limit
  2.5 Ferromagnets and Ising models
  2.6 The Ising spin glass
  Notes
'Approximate message passing' algorithms have proved to be effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics is accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper we provide a rigorous foundation for state evolution. We prove that it indeed holds asymptotically in the large system limit for sensing matrices with independent and identically distributed Gaussian entries.

While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution has for sparse graphs.

The proof technique is fundamentally different from the standard approach to density evolution, in that it copes with a large number of short cycles in the underlying factor graph. It relies instead on a conditioning technique recently developed by Erwin Bolthausen in the context of spin glass theory.

The iteration is driven by an appropriate sequence of nonlinear functions {η_t}_{t≥0}. (Here, by convention, any variable with negative index is assumed to be 0.) The algorithm succeeds if x^t converges to a good approximation of x_0 (cf. [DMM09] for details). Throughout this paper, the matrix A is normalized in such a way that its columns have ℓ_2 norm concentrated around 1. Given a vector x ∈ R^N and a scalar function f : R → R, we write f(x) for the vector obtained by applying f to each component of x.
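The one-dimensional state evolution recursion referred to above is straightforward to simulate. The sketch below tracks the effective noise variance τ_t^2 by Monte Carlo for soft thresholding; the three-point prior on X_0, the threshold rule α·τ_t, the noise level, and the undersampling ratio δ are illustrative assumptions, not quantities fixed by the paper.

```python
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def state_evolution(delta=0.25, eps=0.05, sigma=0.01, alpha=2.0,
                    n_iter=25, n_mc=200_000, seed=0):
    """Monte Carlo sketch of the scalar state evolution recursion

        tau_{t+1}^2 = sigma^2 + (1/delta) * E[(eta(X0 + tau_t Z; alpha*tau_t) - X0)^2],

    for soft thresholding.  The prior on X0 (zero w.p. 1 - eps, +/-1 w.p. eps)
    and the threshold alpha * tau_t are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    x0 = rng.choice([0.0, 1.0, -1.0], size=n_mc, p=[1 - eps, eps / 2, eps / 2])
    z = rng.standard_normal(n_mc)
    tau2 = sigma**2 + np.mean(x0**2) / delta      # starting point: x^0 = 0
    history = [tau2]
    for _ in range(n_iter):
        tau = np.sqrt(tau2)
        mse = np.mean((soft_threshold(x0 + tau * z, alpha * tau) - x0) ** 2)
        tau2 = sigma**2 + mse / delta
        history.append(tau2)
    return history

for t, tau2 in enumerate(state_evolution()):
    print(f"t={t:2d}  tau^2={tau2:.6f}")
```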
Let M be an nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(rn) observed entries with small relative root mean square error. Further, if r = O(1) and M is sufficiently unstructured, then it can be reconstructed exactly from |E| = O(n log n) entries. This settles (in the case of bounded rank) a question left open by Candès and Recht and improves over the guarantees for their reconstruction algorithm. The complexity of our algorithm is O(|E| r log n), which opens the way to its use for massive data sets. In the process of proving these statements, we obtain a generalization of a celebrated result by Friedman-Kahn-Szemerédi and Feige-Ofek on the spectrum of sparse random matrices.
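To make the flavor of such reconstruction procedures concrete, here is a minimal sketch of the spectral step that typically begins them: rescale the zero-filled observed matrix by the inverse sampling rate and project onto rank r. The trimming and local-refinement steps of the actual algorithm are omitted, and the function and parameter names are our own.

```python
import numpy as np

def spectral_completion(M_obs, mask, r):
    """Rank-r spectral estimate of a partially observed matrix.

    M_obs : observed entries (zeros where unobserved)
    mask  : boolean array, True where an entry was observed
    r     : target rank

    Dividing by the sampling rate makes the zero-filled matrix an unbiased
    estimate of M; the top-r SVD then denoises it.  This is only the spectral
    step; the paper's full algorithm also trims over-represented rows/columns
    and refines the estimate locally.
    """
    m, n = M_obs.shape
    p = mask.sum() / (m * n)                  # fraction of observed entries
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# Toy usage: a rank-2 matrix with 20% of entries revealed.
rng = np.random.default_rng(1)
m, n, r = 400, 500, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.20
M_hat = spectral_completion(M * mask, mask, r)
print("relative error:", np.linalg.norm(M_hat - M) / np.linalg.norm(M))
```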
We consider the problem of fitting the parameters of a high-dimensional linear regression model. In the regime where the number of parameters p is comparable to or exceeds the sample size n, a successful approach uses an ℓ_1-penalized least squares estimator, known as the Lasso.

Unfortunately, unlike for linear estimators (e.g., ordinary least squares), no well-established method exists to compute confidence intervals or p-values on the basis of the Lasso estimator. Very recently, a line of work [JM13b, JM13a, vdGBR13] has addressed this problem by constructing a debiased version of the Lasso estimator. In this paper, we study this approach for a random design model, under the assumption that a good estimator exists for the precision matrix of the design. Our analysis improves over the state of the art in that it establishes nearly optimal average testing power if the sample size n asymptotically dominates s_0 (log p)^2, with s_0 being the sparsity level (the number of non-zero coefficients). Earlier work obtains provable guarantees only for much larger sample sizes; namely, it requires n to asymptotically dominate (s_0 log p)^2.

In particular, for random designs with a sparse precision matrix, we show that an estimator thereof having the required properties can be computed efficiently. Finally, we evaluate this approach on synthetic data and compare it with earlier proposals.
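A minimal sketch of the debiasing construction discussed above: starting from a Lasso fit θ̂, form θ̂^d = θ̂ + (1/n) M Xᵀ(y − X θ̂) and read off Gaussian confidence intervals coordinate by coordinate. Using the identity matrix for M is only sensible for designs with (near-)identity covariance; in general M should be an estimate of the design's precision matrix, and the penalty level and plug-in noise estimate below are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso(X, y, lam, M=None):
    """Debiased-Lasso sketch: theta_d = theta_hat + (1/n) M X^T (y - X theta_hat).

    M should approximate the precision matrix of the design; it defaults to
    the identity, which is appropriate only when the design covariance is
    close to the identity.  lam is the Lasso penalty level.
    """
    n, p = X.shape
    theta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    if M is None:
        M = np.eye(p)
    residual = y - X @ theta_hat
    theta_d = theta_hat + (M @ X.T @ residual) / n
    # Plug-in noise estimate and per-coordinate standard errors (Gaussian limit).
    sigma_hat = np.linalg.norm(residual) / np.sqrt(n)
    Sigma_hat = X.T @ X / n
    se = sigma_hat * np.sqrt(np.diag(M @ Sigma_hat @ M.T) / n)
    return theta_d, se

# Toy usage: n = 200 samples, p = 500 coefficients, 10 of them non-zero.
rng = np.random.default_rng(2)
n, p, s0 = 200, 500, 10
X = rng.standard_normal((n, p))
theta = np.zeros(p)
theta[:s0] = 1.0
y = X @ theta + 0.5 * rng.standard_normal(n)
theta_d, se = debiased_lasso(X, y, lam=0.1)
# 95% confidence interval for the first coordinate.
print(f"theta_d[0] = {theta_d[0]:.3f} +/- {1.96 * se[0]:.3f}")
```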