Under certain conditions (known as the Restricted Isometry Property, or RIP) on the m × N matrix Φ (where m < N), vectors x ∈ R^N that are sparse (i.e., have most of their entries equal to zero) can be recovered exactly from y := Φx even though Φ^{-1}(y) is typically an (N − m)-dimensional hyperplane; in addition, x is then equal to the element of Φ^{-1}(y) of minimal ℓ1-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an Iteratively Re-weighted Least Squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in Φ^{-1}(y) with smallest ℓ2(w)-norm. If x^(n) is the solution at iteration step n, then the new weight w^(n) is defined by w_i^(n) := [|x_i^(n)|^2 + ε_n^2]^{-1/2}, i = 1, …, N, for a decreasing sequence of adaptively defined ε_n; this updated weight is then used to obtain x^(n+1), and the process is repeated. We prove that when Φ satisfies the RIP conditions, the sequence x^(n) converges for all y, regardless of whether Φ^{-1}(y) contains a sparse vector. If there is a sparse vector in Φ^{-1}(y), then the limit is this sparse vector, and when x^(n) is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the "heavier" weight w_i^(n) := [|x_i^(n)|^2 + ε_n^2]^{τ/2 − 1}, i = 1, …, N, where 0 < τ < 1, can recover sparse solutions as well; more importantly, we show that its local convergence is superlinear and approaches a quadratic rate as τ approaches zero.
One-bit quantization is a method of representing band-limited signals by ±1 sequences that are computed from regularly spaced samples of these signals; as the sampling density λ → ∞, convolving these one-bit sequences with appropriately chosen filters produces increasingly close approximations of the original signals. This method is widely used for analog-to-digital and digital-to-analog conversion because it is less expensive and simpler to implement than the more familiar critical sampling followed by fine-resolution quantization. However, unlike fine-resolution quantization, the accuracy of one-bit quantization is not well understood. A natural lower bound on the error, decreasing like 2^{−λ}, can easily be given using information-theoretic arguments. Yet no one-bit quantization algorithm was known with an error decay estimate even close to exponential. In this paper we construct an infinite family of one-bit sigma-delta quantization schemes that achieves this goal. In particular, using this family, we prove that the error for π-band-limited signals is at most O(2^{−0.07λ}).
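To make the setup concrete, here is the simplest member of the sigma-delta family, a first-order one-bit scheme with the greedy rule (a sketch for illustration; the exponentially accurate schemes in the abstract are higher-order and more elaborate):

```python
import numpy as np

def one_bit_sigma_delta(samples):
    """First-order one-bit Sigma-Delta with the greedy rule:

        q_n = sign(u_{n-1} + x_n),   u_n = u_{n-1} + x_n - q_n.

    If |x_n| <= 1 for all n, the state satisfies |u_n| <= 1 (stability),
    so the running sums of x and q never drift more than 1 apart.
    """
    u = 0.0
    bits = np.empty(len(samples))
    for n, x in enumerate(samples):
        v = u + x
        bits[n] = 1.0 if v >= 0 else -1.0
        u = v - bits[n]
    return bits
```

Reconstruction convolves the bit sequence with a lowpass filter; for this first-order scheme the error decays only like 1/λ, which is why the exponential rates claimed in the abstract require the more sophisticated family of schemes.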
Quantization of compressed sensing measurements is typically justified by the robust recovery results of Candès, Romberg, and Tao, and of Donoho. These results guarantee that if a uniform quantizer of step size δ is used to quantize m measurements y = Φx of a k-sparse signal x ∈ R^N, where Φ satisfies the restricted isometry property, then the approximate recovery via ℓ1-minimization is within O(δ) of x. The simplest and commonly assumed approach is to quantize each measurement independently. In this paper, we show that if instead an rth-order Σ∆ (sigma-delta) quantization scheme with the same output alphabet is used to quantize y, then there is an alternative recovery method via Sobolev dual frames which guarantees a reduction of the approximation error by a factor of (m/k)^{(r−1/2)α} for any 0 < α < 1, provided m ≳_r k (log N)^{1/(1−α)}. The result holds with high probability on the initial draw of the measurement matrix Φ from the Gaussian distribution, and uniformly for all k-sparse signals x that satisfy a mild size condition on their supports.
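The Sobolev-dual reconstruction step can be sketched as follows, assuming the support of x has already been identified so that the relevant columns of Φ form a frame E. Function names, the first-order (r = 1) choice, and the unbounded uniform quantizer alphabet are our simplifications for illustration:

```python
import numpy as np

def sigma_delta_uniform(y, delta=0.05):
    # First-order greedy Sigma-Delta with a uniform quantizer
    # (unbounded alphabet for simplicity); state u stays in [-delta/2, delta/2].
    u, q = 0.0, np.zeros_like(y)
    for n, yn in enumerate(y):
        v = u + yn
        q[n] = delta * np.round(v / delta)
        u = v - q[n]
    return q

def sobolev_dual(E, r=1):
    # r-th order Sobolev dual of the frame E (m x k): F = (D^{-r} E)^+ D^{-r},
    # where D is the m x m first-difference matrix. F E = I, and F is chosen
    # so that F D^r, the operator hitting the quantization state, is small.
    m = E.shape[0]
    D = np.eye(m) - np.eye(m, k=-1)
    Dinv_r = np.linalg.matrix_power(np.linalg.inv(D), r)
    return np.linalg.pinv(Dinv_r @ E) @ Dinv_r
```

With q = sigma_delta_uniform(E @ x), the reconstruction is x̂ = sobolev_dual(E) @ q; since y − q = D^r u, the error is (D^{-r}E)^+ u, which shrinks as the oversampling m/k grows — the mechanism behind the (m/k)^{(r−1/2)α} gain in the abstract.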
Sigma-delta modulation is a popular method for analog-to-digital conversion of band-limited signals that employs coarse quantization coupled with oversampling. The standard mathematical model for the error analysis of the method measures the performance of a given scheme by the rate at which the associated reconstruction error decays as a function of the oversampling ratio λ. It was recently shown that exponential accuracy of the form O(2^{−rλ}) can be achieved by appropriate one-bit sigma-delta modulation schemes. By general information-entropy arguments, r must be less than 1. The current best-known value for r is approximately 0.088. The schemes that were designed to achieve this accuracy employ the "greedy" quantization rule coupled with feedback filters that fall into a class we call "minimally supported." In this paper, we study the discrete minimization problem that corresponds to optimizing the error decay rate for this class of feedback filters. We solve a relaxed version of this problem exactly and provide explicit asymptotics of the solutions. From these relaxed solutions, we find asymptotically optimal solutions of the original problem, which improve the best-known exponential error decay rate to r ≈ 0.102. Our method draws from the theory of orthogonal polynomials; in particular, it relates the optimal filters to the zero sets of Chebyshev polynomials of the second kind.
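The greedy quantization rule with a feedback filter h can be sketched as below. The filter taps used here are arbitrary illustrations; the filters that achieve r ≈ 0.102 come from the paper's Chebyshev-polynomial construction and are not reproduced:

```python
import numpy as np

def greedy_sigma_delta(x, h):
    """One-bit greedy rule with feedback filter h = (h_1, ..., h_J):

        v_n = x_n + sum_j h_j u_{n-j},  q_n = sign(v_n),  u_n = v_n - q_n.

    Greedy stability: if ||h||_1 + max_n |x_n| <= 2, then by induction
    |v_n| <= 2 and hence |u_n| = |v_n - sign(v_n)| <= 1 for all n.
    h = [1.0] recovers the standard first-order scheme.
    """
    u = np.zeros(len(x))
    q = np.zeros(len(x))
    for n in range(len(x)):
        v = x[n] + sum(hj * u[n - j]
                       for j, hj in enumerate(h, start=1) if n - j >= 0)
        q[n] = 1.0 if v >= 0 else -1.0
        u[n] = v - q[n]
    return q, u
```

A "minimally supported" filter has few nonzero taps; the choice of those taps governs the effective order of the scheme and hence the achievable error decay exponent, which is the quantity the paper optimizes.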