We consider the problem of computing the k-sparse approximation to the discrete Fourier transform of an n-dimensional signal. We show:
• An O(k log n)-time randomized algorithm for the case where the input signal has at most k non-zero Fourier coefficients, and
• An O(k log n log(n/k))-time randomized algorithm for general input signals.
Both algorithms achieve o(n log n) time, and thus improve over the Fast Fourier Transform, for any k = o(n). They are the first known algorithms that satisfy this property. Moreover, if one assumes that the Fast Fourier Transform is optimal, the algorithm for the exactly k-sparse case is optimal for any k = n^Ω(1). We complement our algorithmic results by showing that any algorithm for computing the sparse Fourier transform of a general signal must use at least Ω(k log(n/k) / log log n) signal samples, even if it is allowed to perform adaptive sampling.
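The abstract does not spell out the machinery, but a core primitive in this line of work is frequency bucketization: subsampling a signal in time aliases its spectrum, so each bin of a small B-point FFT collects exactly the Fourier coefficients whose frequencies are congruent mod B. The sketch below (plain NumPy; the values of n, B, k and the random support are illustrative choices of ours, not parameters from the paper) verifies that aliasing identity for an exactly k-sparse signal; it is a minimal demonstration of the primitive, not the full algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B, k = 1024, 32, 4                  # signal length, number of buckets, sparsity

# Build an exactly k-sparse spectrum whose frequencies land in distinct buckets.
freqs = rng.choice(n, size=k, replace=False)
while len(set(f % B for f in freqs)) < k:          # retry on bucket collisions
    freqs = rng.choice(n, size=k, replace=False)
X = np.zeros(n, dtype=complex)
X[freqs] = rng.standard_normal(k) + 1j * rng.standard_normal(k)

x = np.fft.ifft(X)                     # time-domain signal with k-sparse spectrum

# Bucketize: subsample by n // B, then take a B-point FFT (only O(B log B) work).
buckets = np.fft.fft(x[:: n // B])

# Each nonzero coefficient X[f] appears, scaled by B/n, in bucket f mod B.
for f in freqs:
    assert np.isclose(buckets[f % B], (B / n) * X[f])
print("all k coefficients recovered from their buckets")
```

In the full algorithms, random permutations of the spectrum make collisions unlikely and filters sharpen the buckets; the aliasing identity above is the step that makes the per-bucket cost sublinear in n.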
We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. Unlike the typical approach to this problem, our algorithm is not iterative. That is, instead of estimating "large" coefficients, subtracting them, and recursing on the remainder, it identifies and estimates the k largest coefficients in "one shot", in a manner akin to sketching/streaming algorithms. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to considerably extend the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice.¹

¹ The algorithm in this paper has a somewhat stronger guarantee; see "Results" for more details.
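As a rough illustration of the filters named above, the snippet below constructs a Dolph-Chebyshev window with SciPy and checks its defining property: every frequency-domain sidelobe sits at or below a prescribed attenuation, which is what lets a bucket capture its own coefficients while leaking little mass into neighboring buckets (scipy.signal.windows.gaussian plays the analogous role for the Gaussian filter family). The window length and attenuation are arbitrary demo values; this is only the filter-construction step, not the paper's algorithm.

```python
import numpy as np
from scipy.signal.windows import chebwin

M, atten_db = 64, 80          # window length and sidelobe attenuation, demo values

w = chebwin(M, at=atten_db)   # Dolph-Chebyshev window: equiripple sidelobes

# Finely sampled frequency response via a zero-padded FFT, normalized to 0 dB peak.
W = np.abs(np.fft.fft(w, 8192))
W /= W.max()

# Locate the end of the main lobe, then measure the worst sidelobe up to Nyquist.
ripple = 10 ** (-atten_db / 20)
mainlobe_end = np.argmax(W < ripple)
sidelobe_db = 20 * np.log10(W[mainlobe_end:4096].max())
print(f"peak sidelobe: {sidelobe_db:.1f} dB (designed for -{atten_db} dB)")
```

The Dolph-Chebyshev window is optimal in the sense of minimizing main-lobe width for a given sidelobe level, which is why it is a natural building block for flat-window filters.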
The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G: R^k → R^n. Our main theorem is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an ℓ2/ℓ2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoders and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.
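The natural recovery procedure in this setting is to search the generator's latent space for a code whose output explains the measurements, i.e., to minimize ‖A·G(z) − y‖² over z. The sketch below is our own toy construction, assuming a small untrained random two-layer generator and hand-coded gradients; an actual experiment would use a trained VAE/GAN decoder, an autodiff framework, and random restarts, since the objective is nonconvex.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, m = 5, 200, 40                  # latent dim, signal dim, number of measurements

# Toy generator G(z) = W2 @ tanh(W1 @ z): untrained, for illustration only.
W1 = rng.standard_normal((64, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, 64)) / np.sqrt(64)
G = lambda z: W2 @ np.tanh(W1 @ z)

A = rng.standard_normal((m, n)) / np.sqrt(m)        # random Gaussian measurements
z_true = rng.standard_normal(k)
y = A @ G(z_true) + 0.01 * rng.standard_normal(m)   # noisy measurements

# Minimize f(z) = ||A G(z) - y||^2 by gradient descent (chain-rule gradients).
z = np.zeros(k)
for _ in range(5000):
    h = np.tanh(W1 @ z)
    r = A @ (W2 @ h) - y              # residual in measurement space
    grad_h = W2.T @ (A.T @ (2 * r))   # d f / d h
    grad_z = W1.T @ ((1 - h**2) * grad_h)
    z -= 5e-3 * grad_z                # small step; restarts help in practice

print("reconstruction error:", np.linalg.norm(G(z) - G(z_true)))
```

Note that m = 40 measurements of a 200-dimensional signal would be hopeless without the prior; the generator confines the search to a k = 5 dimensional manifold.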
We consider the problem of identifying the parameters of an unknown mixture of two arbitrary d-dimensional Gaussians from a sequence of independent random samples. Our main results are upper and lower bounds giving a computationally efficient moment-based estimator with an optimal convergence rate, thus resolving a problem introduced by Pearson (1894). Denoting by σ² the variance of the unknown mixture, we prove that Θ(σ^12) samples are necessary and sufficient to estimate each parameter up to constant additive error when d = 1. Our upper bound extends to arbitrary dimension d > 1 up to a (provably necessary) logarithmic loss in d, using a novel yet simple dimensionality-reduction technique. We further identify several interesting special cases where the sample complexity is notably smaller than our optimal worst-case bound. For instance, if the means of the two components are separated by Ω(σ), the sample complexity reduces to O(σ²), and this is again optimal. Our results also apply to learning each component of the mixture up to small error in total variation distance, where our algorithm gives strong improvements in sample complexity over previous work.
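As a rough illustration of the Pearson-style moment-matching idea (a generic sketch of ours, not the paper's estimator), the snippet below fits the five parameters of a 1-D two-component mixture by least-squares matching of the first six empirical raw moments, using the standard recursion E[X^m] = μ·E[X^{m-1}] + (m−1)·σ²·E[X^{m-2}] for the raw moments of N(μ, σ²). The optimizer settings and starting point are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_raw_moments(mu, sigma, order):
    """Raw moments E[X^m], m = 1..order, for X ~ N(mu, sigma^2), via
    the recursion E[X^m] = mu E[X^{m-1}] + (m-1) sigma^2 E[X^{m-2}]."""
    mom = [1.0, mu]
    for m in range(2, order + 1):
        mom.append(mu * mom[m - 1] + (m - 1) * sigma**2 * mom[m - 2])
    return np.array(mom[1:])

def mixture_moments(params, order=6):
    w, mu1, s1, mu2, s2 = params
    return (w * gaussian_raw_moments(mu1, s1, order)
            + (1 - w) * gaussian_raw_moments(mu2, s2, order))

# Synthetic data: a mixture of N(-1, 1) and N(2, 0.5^2) with weights 0.4 / 0.6.
rng = np.random.default_rng(2)
n = 200_000
comp = rng.random(n) < 0.4
samples = np.where(comp, rng.normal(-1, 1.0, n), rng.normal(2, 0.5, n))
emp = np.array([np.mean(samples**m) for m in range(1, 7)])

# Least-squares moment matching over (w, mu1, s1, mu2, s2).
obj = lambda p: np.sum((mixture_moments(p) - emp) ** 2)
res = minimize(obj, x0=[0.5, -0.5, 1.0, 1.0, 1.0],
               bounds=[(0.01, 0.99), (-5, 5), (0.1, 5), (-5, 5), (0.1, 5)])
print("estimated (w, mu1, s1, mu2, s2):", np.round(res.x, 2))
```

The σ^12 rate in the abstract reflects how delicately the high-order moments must be estimated: naive matching like the above degrades quickly as the components overlap, which is exactly the regime the paper's denoising analysis addresses.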
We consider the following k-sparse recovery problem: design an m × n matrix A, such that for any signal x, given Ax we can efficiently recover x̂ satisfying ‖x − x̂‖₁ ≤ C · min_{k-sparse x′} ‖x − x′‖₁. It is known that there exist matrices A with this property that have only O(k log(n/k)) rows. In this paper we show that this bound is tight. Our bound holds even for the more general randomized version of the problem, where A is a random variable, and the recovery algorithm is required to work for any fixed x with constant probability (over A).
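For intuition about the guarantee, the benchmark on its right-hand side, min over k-sparse x′ of ‖x − x′‖₁, is simply the ℓ1 mass of the tail of x: the best k-sparse approximation keeps the k largest-magnitude entries and zeroes the rest. A small check of ours, for illustration:

```python
import numpy as np

def best_ksparse_error_l1(x, k):
    """min over k-sparse x' of ||x - x'||_1: the l1 mass outside
    the k largest-magnitude entries of x."""
    mags = np.sort(np.abs(x))          # ascending order
    return mags[:-k].sum() if k > 0 else mags.sum()

x = np.array([5.0, -0.1, 0.3, 4.0, -0.2])
print(best_ksparse_error_l1(x, k=2))   # 0.6: tail entries 0.1 + 0.3 + 0.2
```

In particular, the guarantee is vacuous only when x has no good k-sparse approximation, and the paper's lower bound says that even achieving this relative guarantee forces Ω(k log(n/k)) measurements.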