Compressive sensing (CS) is an alternative to Shannon/Nyquist sampling for the acquisition of sparse or compressible signals that can be well approximated by just K << N elements from an N-dimensional basis. Instead of taking periodic samples, CS measures inner products with M < N random vectors and then recovers the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS dictates that robust signal recovery is possible from M = O(K log(N/K)) measurements. It is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility to include structural dependencies between the values and locations of the signal coefficients. This paper introduces a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that we dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS. Two examples integrate two relevant signal models (wavelet trees and block sparsity) into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.
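To make the model-based idea concrete, the sketch below shows how a structured-sparsity projection can replace plain hard thresholding inside a CoSaMP-style iteration, using the block-sparse model as the example. This is an illustrative NumPy sketch, not the paper's exact algorithm: the block length B, the iteration count, and the helper names are assumptions, and the signal length is assumed divisible by B (and K by B).

```python
import numpy as np

def block_project(x, K, B):
    """Model projection for block sparsity: keep the K/B length-B blocks
    with the largest energy and zero out the rest (assumes B divides len(x) and K)."""
    blocks = x.reshape(-1, B)
    energies = np.sum(blocks ** 2, axis=1)
    keep = np.argsort(energies)[::-1][:K // B]
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

def model_cosamp(Phi, y, K, B, iters=20):
    """CoSaMP-style recovery in which the sparse approximation steps use the
    block-sparse model projection instead of plain K-term hard thresholding."""
    N = Phi.shape[1]
    x = np.zeros(N)
    for _ in range(iters):
        proxy = Phi.T @ (y - Phi @ x)                          # proxy of the residual signal
        cand = np.flatnonzero(block_project(proxy, 2 * K, B))  # enlarged candidate support
        supp = np.union1d(cand, np.flatnonzero(x))
        b = np.zeros(N)
        b[supp] = np.linalg.lstsq(Phi[:, supp], y, rcond=None)[0]  # least squares on the support
        x = block_project(b, K, B)                             # prune back to the model
    return x
```

For the wavelet-tree model, block_project would be swapped for a projection onto rooted subtrees; the surrounding iteration stays the same.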
Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients are clustered. Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms.
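As a rough illustration of the idea only, the NumPy sketch below replaces LaMP's MRF inference step with a much simpler iterated-conditional-modes heuristic for a 1-D Ising-style clustering prior; the energy, thresholds, and loop structure are assumptions made for the sake of a runnable example, not the algorithm from the paper.

```python
import numpy as np

def icm_clustered_support(proxy, lam=1.0, beta=0.5, sweeps=5):
    """Heuristic support estimate under a 1-D Ising-style clustering prior, computed
    with iterated conditional modes: a coefficient is switched on when its magnitude
    exceeds lam plus a penalty that shrinks as more of its neighbors are already on."""
    mag = np.abs(proxy)
    s = (mag > lam).astype(int)
    for _ in range(sweeps):
        for i in range(len(s)):
            nbr_on = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < len(s) - 1 else 0)
            s[i] = int(mag[i] > lam + beta * (2 - 2 * nbr_on))
    return np.flatnonzero(s)

def lamp_like_recovery(Phi, y, iters=10, lam=1.0, beta=0.5):
    """Matching-pursuit-style loop whose support selection is guided by the
    clustering prior above, in the spirit of (but much cruder than) LaMP."""
    N = Phi.shape[1]
    x = np.zeros(N)
    for _ in range(iters):
        proxy = x + Phi.T @ (y - Phi @ x)      # gradient-style estimate of the signal
        supp = icm_clustered_support(proxy, lam, beta)
        x = np.zeros(N)
        if supp.size:
            x[supp] = np.linalg.lstsq(Phi[:, supp], y, rcond=None)[0]
    return x
```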
In recent work, both sparsity-based and learning-based methods have proven successful in solving several challenging linear inverse problems. However, sparsity priors for natural signals and images suffer from poor discriminative capability, while learning-based methods seldom provide concrete theoretical guarantees. In this work, we advocate replacing hand-crafted priors, such as sparsity, with a Generative Adversarial Network (GAN) to solve linear inverse problems such as compressive sensing. In particular, we propose a projected gradient descent (PGD) algorithm for effective use of GAN priors in linear inverse problems, and also provide theoretical guarantees on the rate of convergence of this algorithm. Moreover, we show empirically that our algorithm demonstrates superior performance over an existing method of leveraging GANs for compressive sensing.
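A minimal PyTorch sketch of the projected-gradient idea follows. It assumes a pretrained generator G mapping a (1, z_dim) latent vector to a (1, n) signal, an (m, n) measurement matrix A, and measurements y; the step sizes, iteration counts, and the use of Adam for the inner projection are illustrative choices, not the paper's. The inner loop approximates the projection onto the (nonconvex) range of G, which has no closed form.

```python
import torch

def pgd_gan_recovery(A, y, G, z_dim, outer_steps=50, inner_steps=30,
                     eta=0.5, lr_z=1e-2):
    """Sketch of projected gradient descent with a generative prior: alternate a
    gradient step on the measurement loss ||A x - y||^2 with an approximate
    projection of the iterate onto range(G), computed by descent in latent space."""
    z = torch.randn(1, z_dim, requires_grad=True)
    x = G(z).detach().view(-1)
    for _ in range(outer_steps):
        # gradient step on the data-fidelity term
        w = x - eta * (A.T @ (A @ x - y))
        # approximate projection onto range(G): find a latent whose output matches w
        opt = torch.optim.Adam([z], lr=lr_z)
        for _ in range(inner_steps):
            opt.zero_grad()
            loss = torch.sum((G(z).view(-1) - w) ** 2)
            loss.backward()
            opt.step()
        x = G(z).detach().view(-1)
    return x
```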
We propose a novel framework for the deterministic construction of linear, near-isometric embeddings of a finite set of data points. Given a set of training points X ⊂ R^N, we consider the secant set S(X) that consists of all pairwise difference vectors of X, normalized to lie on the unit sphere. We formulate an affine rank minimization problem to construct a matrix Ψ that preserves the norms of all the vectors in S(X) up to a distortion parameter δ. While affine rank minimization is NP-hard, we show that this problem can be relaxed to a convex formulation that can be solved using a tractable semidefinite program (SDP). In order to enable scalability of our proposed SDP to very large-scale problems, we adopt a two-stage approach. First, to reduce compute time, we develop a novel algorithm based on the Alternating Direction Method of Multipliers (ADMM), which we call Nuclear norm minimization with Max-norm constraints (NuMax), to solve the SDP. Second, we develop a greedy, approximate version of NuMax based on the column generation method commonly used to solve large-scale linear programs. We demonstrate that our framework is useful for a number of signal processing applications via a range of experiments on large-scale synthetic and real datasets.
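For small problems, the convex relaxation can be written down directly with an off-the-shelf solver. The CVXPY sketch below is an illustrative formulation under one reading of that relaxation: minimize the trace of a PSD Gram matrix P = Ψ^T Ψ subject to each secant's norm being preserved within δ, then factor P to obtain Ψ. It is the custom ADMM solver (NuMax) and the column-generation variant that make the approach scale beyond this toy setting.

```python
import numpy as np
import cvxpy as cp

def secant_set(X):
    """Normalized pairwise differences of the training points (rows of X)."""
    diffs = [X[i] - X[j] for i in range(len(X)) for j in range(i + 1, len(X))]
    return np.array([d / np.linalg.norm(d) for d in diffs if np.linalg.norm(d) > 0])

def near_isometric_embedding(X, delta=0.1):
    """Solve a trace-minimization SDP that preserves every secant's norm up to
    distortion delta, then factor the Gram matrix P to obtain the embedding Psi."""
    S = secant_set(X)
    N = X.shape[1]
    P = cp.Variable((N, N), PSD=True)
    constraints = [cp.abs(cp.quad_form(s, P) - 1) <= delta for s in S]
    cp.Problem(cp.Minimize(cp.trace(P)), constraints).solve()
    # factor P = Psi^T Psi, keeping only the numerically nonzero eigenvalues
    w, V = np.linalg.eigh(P.value)
    keep = w > 1e-6
    Psi = np.sqrt(w[keep])[:, None] * V[:, keep].T
    return Psi
```

The number of rows of Psi (the embedding dimension) is the numerical rank of P, and new points are embedded by multiplying with Psi.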
Compressive Sensing (CS) states that a sparse signal can be recovered from a small number of linear measurements, and that this recovery can be performed efficiently in polynomial time. The framework of model-based compressive sensing (model-CS) leverages additional structure in the signal and provides new recovery schemes that can reduce the number of measurements even further. This idea has led to measurement-efficient recovery schemes for a variety of signal models. However, for any given model, model-CS requires an algorithm that solves the model-projection problem: given a query signal, report the signal in the model that is closest to the query signal. Often, this optimization problem can be computationally very expensive. Moreover, an approximation algorithm is not sufficient to provably recover the signal. As a result, the model-projection problem poses a fundamental obstacle for extending model-CS to many interesting classes of models. In this paper, we introduce a new framework that we call approximation-tolerant model-based compressive sensing. This framework includes a range of algorithms for sparse recovery that require only approximate solutions for the model-projection problem. In essence, our work removes the aforementioned obstacle to model-based compressive sensing, thereby extending model-CS to a much wider class of signal models. Interestingly, all our algorithms involve both the minimization and maximization variants of the model-projection problem. We instantiate our new framework for a new signal model that we call the Constrained Earth Mover Distance (CEMD) model. This model is particularly useful for signal ensembles where the positions of the nonzero coefficients do not change significantly as a function of spatial (or temporal) location. We develop novel approximation algorithms for both the maximization and the minimization versions of the model-projection problem via graph optimization techniques. Leveraging these algorithms and our framework results in a nearly sample-optimal sparse recovery scheme for the CEMD model.
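The skeleton below sketches how such a recovery loop can consume approximate projection oracles: a "head" oracle (the maximization variant, returning a support that captures much of a vector's energy) and a "tail" oracle (the minimization variant, projecting an estimate back onto the model). The loop structure and the plain top-K example oracles are illustrative assumptions; a CEMD instantiation would plug in the paper's graph-based approximation algorithms instead.

```python
import numpy as np

def approx_tolerant_iht(Phi, y, head_approx, tail_approx, iters=20):
    """Skeleton of an approximation-tolerant, IHT-style recovery loop:
    head_approx(v) returns a support set capturing much of v's energy,
    tail_approx(v) returns a signal in (or near) the model close to v.
    Both oracles may be approximate."""
    N = Phi.shape[1]
    x = np.zeros(N)
    for _ in range(iters):
        g = Phi.T @ (y - Phi @ x)     # gradient of the least-squares objective
        supp = head_approx(g)         # approximate head projection (support set)
        b = x.copy()
        b[supp] += g[supp]            # gradient step restricted to that support
        x = tail_approx(b)            # approximate tail projection onto the model
    return x

# Example oracles for the plain K-sparse model (stand-ins for model-specific ones):
def topk_support(v, K=10):
    return np.argsort(np.abs(v))[::-1][:K]

def topk_project(v, K=10):
    out = np.zeros_like(v)
    idx = topk_support(v, K)
    out[idx] = v[idx]
    return out
```

For instance, calling approx_tolerant_iht(Phi, y, lambda g: topk_support(g, K), lambda b: topk_project(b, K)) runs this skeleton for an ordinary K-sparse signal.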