Compressed sensing is a technique for sampling compressible signals below the Nyquist rate, whilst still allowing near-optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper):

• It gives near-optimal error guarantees.
• It is robust to observation noise.
• It succeeds with a minimum number of observations.
• It can be used with any sampling operator for which the operator and its adjoint can be computed.
• The memory requirement is linear in the problem size.
• Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint.
• It requires a fixed number of iterations depending only on the logarithm of a form of signal-to-noise ratio of the signal.
• Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
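The iteration analysed in this line of work is x ← H_K(x + Φᵀ(y − Φx)), where H_K keeps the K largest-magnitude entries and sets the rest to zero. A minimal NumPy sketch of this basic iteration is given below; the problem sizes, the Gaussian operator, the scaling to ensure ||Φ||₂ < 1, and the fixed iteration count are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def hard_threshold(x, K):
    """Keep the K largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-K:]
    out[idx] = x[idx]
    return out

def iht(Phi, y, K, iters=200):
    """Basic iterative hard thresholding: x <- H_K(x + Phi^T (y - Phi x)).
    Convergence of this unmodified form assumes ||Phi||_2 < 1."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + Phi.T @ (y - Phi @ x), K)
    return x

# Illustrative example: random Gaussian operator, rescaled so ||Phi||_2 < 1.
rng = np.random.default_rng(0)
m, n, K = 10, 20, 2
Phi = rng.standard_normal((m, n))
Phi /= 1.01 * np.linalg.norm(Phi, 2)
x_true = np.zeros(n)
x_true[[3, 11]] = [1.0, -2.0]
y = Phi @ x_true
x_hat = iht(Phi, y, K)
```

Note that the loop touches the operator only through products with Φ and Φᵀ, which is the source of the linear-memory and per-iteration-cost claims listed above.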
ABSTRACT. Sparse signal expansions represent or approximate a signal using a small number of elements from a large collection of elementary waveforms. Finding the optimal sparse expansion is known to be NP-hard in general, and non-optimal strategies such as Matching Pursuit, Orthogonal Matching Pursuit, Basis Pursuit and Basis Pursuit De-noising are often called upon. These methods show good performance in practical situations; however, they do not operate on the ℓ0-penalised cost functions that are often at the heart of the problem. In this paper we study two iterative algorithms that minimise the cost functions of interest. Furthermore, each iteration of these strategies has computational complexity similar to a Matching Pursuit iteration, making the methods applicable to many real-world problems. However, the optimisation problem is non-convex and the strategies are only guaranteed to find local solutions, so good initialisation becomes paramount. We here study two approaches. The first uses the proposed algorithms to refine the solutions found with other methods, replacing the typically used conjugate gradient solver. The second adapts the algorithms, and we show on one example that this adaptation can be used to achieve results that lie between those obtained with Matching Pursuit and those found with Orthogonal Matching Pursuit, while retaining the computational complexity of the Matching Pursuit algorithm.
Abstract. Sparse signal models are used in many signal processing applications. The task of estimating the sparsest coefficient vector in these models is a combinatorial problem, and efficient, often sub-optimal strategies have to be used. Fortunately, under certain conditions on the model, several algorithms can be shown to efficiently calculate near-optimal solutions. In this paper, we study one of these methods, the so-called Iterative Hard Thresholding algorithm. We are here interested in the application of this method to real-world problems, in which it is not known in general whether the conditions used in the performance guarantees are satisfied. We suggest a simple modification to the algorithm that guarantees the convergence of the method, even in a regime in which the theoretical condition is not satisfied. With this modification, empirical evidence suggests that the algorithm is faster than many other state-of-the-art approaches whilst showing similar performance. What is more, the modified algorithm retains theoretical performance guarantees similar to those of the original algorithm.
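The modification described above replaces the unit step of basic IHT with an adaptively chosen step size that is shrunk whenever a monotonicity condition fails, so the residual never increases even when the theoretical conditions on the operator do not hold. The sketch below is one reading of such a scheme, not the authors' reference code; the step-size formula, the acceptance test, the shrinkage constant c, and the stopping rule are assumptions made for illustration:

```python
import numpy as np

def hard_threshold(x, K):
    """Keep the K largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-K:]
    out[idx] = x[idx]
    return out

def normalised_iht(Phi, y, K, iters=100, c=0.01):
    """IHT with an adaptive step size and a backtracking check.

    The step mu is chosen as the exact line minimiser restricted to the
    current support; if the support changes, mu is halved until it
    satisfies mu <= (1 - c) * ||d||^2 / ||Phi d||^2, which guarantees
    that the residual ||y - Phi x|| does not increase.
    """
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = Phi.T @ (y - Phi @ x)
        # Current support (top-K entries of the gradient at the start).
        S = np.flatnonzero(x) if np.any(x) else np.argsort(np.abs(g))[-K:]
        denom = np.linalg.norm(Phi[:, S] @ g[S]) ** 2
        mu = (np.linalg.norm(g[S]) ** 2 / denom) if denom > 0 else 1.0
        while True:
            x_new = hard_threshold(x + mu * g, K)
            d = x_new - x
            nPd = np.linalg.norm(Phi @ d) ** 2
            same_support = set(np.flatnonzero(x_new)) == set(S)
            if same_support or nPd == 0 or \
                    mu <= (1 - c) * np.linalg.norm(d) ** 2 / nPd:
                break
            mu /= 2.0  # support changed and mu is too large: backtrack
        x = x_new
    return x

# Illustrative example: no rescaling of Phi is needed here, which is the
# practical point of the adaptive step size.
rng = np.random.default_rng(1)
m, n, K = 12, 24, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[2, 7, 19]] = [1.5, -1.0, 0.5]
y = Phi @ x_true
x_hat = normalised_iht(Phi, y, K)
```

Because each accepted step either exactly minimises the residual on a fixed support or passes the backtracking test, the cost is non-increasing regardless of the scaling of Φ, which mirrors the convergence guarantee claimed in the abstract.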
Compressed sensing is an emerging signal acquisition technique that enables signals to be sampled well below the Nyquist rate, given that the signal has a sparse representation in an orthonormal basis. In fact, sparsity in an orthonormal basis is only one possible signal model that allows for sampling strategies below the Nyquist rate. In this paper we consider a more general signal model and assume signals that live on or close to a union of linear subspaces of low dimension. We present sampling theorems for this model that are in the same spirit as the Nyquist-Shannon sampling theorem in that they connect the number of required samples to certain model parameters. Contrary to the Nyquist-Shannon sampling theorem, which gives a necessary and sufficient condition for the number of required samples as well as a simple linear algorithm for signal reconstruction, the model studied here is more complex. We therefore concentrate on two aspects of the signal model: the existence of one-to-one maps to lower-dimensional observation spaces and the smoothness of the inverse map. We show that almost all linear maps are one-to-one when the observation space is at least of the same dimension as the largest dimension of the convex hull of the union of any two subspaces in the model. However, we also show that in order for the inverse map to have certain smoothness properties, such as a given finite Lipschitz constant, the required observation dimension necessarily depends logarithmically on the number of subspaces in the signal model. (This is a corrected version of the paper in which a few small errors have been fixed; importantly, the dependence on δ in Theorem 3.3 and its corollaries has been corrected. VERSION: DECEMBER 3, 2009.)
In other words, whilst unique linear sampling schemes require only a small number of samples, depending only on the dimension of the subspaces involved, stable sampling methods necessarily require a number of samples that depends logarithmically on the number of subspaces in the model. These results are then applied to two examples: the standard compressed sensing signal model, in which the signal has a sparse representation in an orthonormal basis, and a sparse signal model with additional tree structure.
We propose a novel computational strategy to partition the cerebral cortex into disjoint, spatially contiguous and functionally homogeneous parcels. The approach exploits spatial dependency in the fluctuations observed with functional Magnetic Resonance Imaging (fMRI) during rest. Single subject parcellations are derived in a two stage procedure in which a set of (~1000 to 5000) stable seeds is grown into an initial detailed parcellation. This parcellation is then further clustered using a hierarchical approach that enforces spatial contiguity of the parcels. A major challenge is the objective evaluation and comparison of different parcellation strategies; here, we use a range of different measures. Our single subject approach allows a subject-specific parcellation of the cortex, which shows high scan-to-scan reproducibility and whose borders delineate clear changes in functional connectivity. Another important measure, on which our approach performs well, is the overlap of parcels with task fMRI derived clusters. Connectivity-derived parcellation borders are less well matched to borders derived from cortical myelination and from cytoarchitectonic atlases, but this may reflect inherent differences in the data.