We study the question of reconstructing two signals f and g from their convolution y = f * g. This problem, known as blind deconvolution, pervades many areas of science and technology, including astronomy, medical imaging, optics, and wireless communications. A key challenge of this intricate non-convex optimization problem is that it may exhibit many local minima. We present an efficient numerical algorithm that is guaranteed to recover the exact solution when the number of measurements is (up to log-factors) slightly larger than the information-theoretic minimum, and under reasonable conditions on f and g. The proposed regularized gradient descent algorithm converges at a geometric rate and is provably robust in the presence of noise. To the best of our knowledge, our algorithm is the first blind deconvolution algorithm that is numerically efficient, robust against noise, and comes with rigorous recovery guarantees under certain subspace conditions. Moreover, numerical experiments not only provide empirical verification of our theory, but also demonstrate that our method yields excellent performance even in situations beyond our theoretical framework.
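To make the measurement model concrete, the following is a minimal Python/NumPy sketch of the circular-convolution forward model y = f * g and of a single plain gradient step on the bilinear least-squares loss. The subspace constraints on f and g, the spectral initialization, and the regularizer used by the actual algorithm are omitted; all dimensions are illustrative assumptions.

```python
import numpy as np

# Sketch of the blind deconvolution forward model y = f * g (circular
# convolution via FFT) and one plain gradient step on the bilinear loss
# 0.5 * ||f * g - y||^2.  Dimensions are illustrative; the paper's algorithm
# additionally uses subspace models, a spectral initializer, and a regularizer.

def circ_conv(u, v):
    """Circular convolution computed in the Fourier domain."""
    return np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)).real

def circ_corr(u, v):
    """Circular correlation; the adjoint of circular convolution with v."""
    return np.fft.ifft(np.fft.fft(u) * np.conj(np.fft.fft(v))).real

rng = np.random.default_rng(0)
L = 256                                  # number of measurements
f_true = rng.standard_normal(L)          # unknown blurring function
g_true = rng.standard_normal(L)          # unknown signal
y = circ_conv(f_true, g_true)            # observed data

# Current estimates (in practice these come from a spectral initialization).
f = rng.standard_normal(L)
g = rng.standard_normal(L)
step = 1e-3

residual = circ_conv(f, g) - y
grad_f = circ_corr(residual, g)          # gradient of the loss w.r.t. f
grad_g = circ_corr(residual, f)          # gradient of the loss w.r.t. g
f -= step * grad_f
g -= step * grad_g
print("loss after one step:", 0.5 * np.linalg.norm(circ_conv(f, g) - y) ** 2)
```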
The design of high-precision sensing devices is becoming ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance, advanced sensors must be carefully calibrated, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts, namely Self-Calibration, Compressive Sensing, and Biconvex Optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where both x and the diagonal matrix D (which models the calibration error) are unknown. By "lifting" this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed, and numerical simulations are presented, confirming and complementing our theoretical analysis.
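As an illustration of the lifting step, here is a minimal Python sketch (using NumPy and CVXPY, which are choices of this example rather than the paper's implementation). To keep the problem well posed, the sketch additionally assumes the unknown diagonal lies in a known low-dimensional subspace, d = Bh; all dimensions and the Gaussian sensing matrices are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Sketch of the SparseLift idea for y = D A x with D = diag(B h) and x sparse,
# where B is a known subspace for the calibration error and both h and x are
# unknown.  Lifting replaces the bilinear unknown (h, x) by X = h x^T; each
# measurement y_i = (b_i^T h)(a_i^T x) = b_i^T X a_i is then LINEAR in X, and
# entrywise l1-minimization (solvable as a linear program) exploits the
# sparsity of X.  Dimensions and the Gaussian matrices are illustrative.

rng = np.random.default_rng(1)
L, N, K, k = 128, 64, 4, 3               # measurements, signal dim, calib dim, sparsity
A = rng.standard_normal((L, N))
B = rng.standard_normal((L, K))
h = rng.standard_normal(K)               # unknown calibration coefficients, d = B h
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = (B @ h) * (A @ x)                    # observed data

X = cp.Variable((K, N))                  # lifted unknown, ideally h x^T
constraints = [cp.sum(cp.multiply(B @ X, A), axis=1) == y]
cp.Problem(cp.Minimize(cp.sum(cp.abs(X))), constraints).solve()

# h and x are read off (up to the unavoidable scaling ambiguity) from the best
# rank-one factorization of the lifted solution.
U, s, Vt = np.linalg.svd(X.value)
h_hat, x_hat = np.sqrt(s[0]) * U[:, 0], np.sqrt(s[0]) * Vt[0]
truth = np.outer(h, x)
print("relative lift error:", np.linalg.norm(X.value - truth) / np.linalg.norm(truth))
```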
Suppose that we have r sensors and each one intends to send a function g_i (e.g. a signal or an image) to a receiver common to all r sensors. During transmission, each g_i gets convolved with a function f_i. The receiver records the function y, given by the sum of all these convolved signals. When and under which conditions is it possible to recover the individual signals g_i and the blurring functions f_i from just one received signal y? This challenging problem, which intertwines blind deconvolution with blind demixing, appears in a variety of applications, such as audio processing, image processing, neuroscience, spectroscopy, and astronomy. It is also expected to play a central role in connection with the future Internet-of-Things. We will prove that under reasonable and practical assumptions, it is possible to solve this otherwise highly ill-posed problem and recover the r transmitted functions g_i and the impulse responses f_i in a robust, reliable, and efficient manner from just a single received function y by solving a semidefinite program. We derive explicit bounds on the number of measurements needed for successful recovery and prove that our method is robust in the presence of noise. Our theory is actually sub-optimal, since numerical experiments demonstrate that, quite remarkably, recovery is still possible if the number of measurements is close to the number of degrees of freedom.
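The following is a minimal Python/CVXPY sketch of a lifted convex program of this kind: written in a subspace/Fourier domain, each measurement is linear in the rank-one matrices h_i x_i^T, and minimizing the sum of their nuclear norms can be recast as a semidefinite program. The number of sources, all dimensions, and the Gaussian subspace matrices are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import cvxpy as cp

# Sketch of a lifted convex relaxation for y = sum_i f_i * g_i, expressed in a
# subspace domain where each measurement is linear in the rank-one matrices
# X_i = h_i x_i^T.  Minimizing the sum of nuclear norms is representable as a
# semidefinite program.  Sources, dimensions, and Gaussian matrices are
# illustrative assumptions.

rng = np.random.default_rng(2)
r, L, K, N = 2, 80, 4, 4                 # sources, measurements, dims of h_i and x_i
Bs = [rng.standard_normal((L, K)) for _ in range(r)]
As = [rng.standard_normal((L, N)) for _ in range(r)]
hs = [rng.standard_normal(K) for _ in range(r)]
xs = [rng.standard_normal(N) for _ in range(r)]
y = sum((B @ h) * (A @ x) for B, A, h, x in zip(Bs, As, hs, xs))   # one received signal

Xs = [cp.Variable((K, N)) for _ in range(r)]                        # lifted unknowns
measured = sum(cp.sum(cp.multiply(B @ X, A), axis=1) for B, A, X in zip(Bs, As, Xs))
problem = cp.Problem(cp.Minimize(sum(cp.normNuc(X) for X in Xs)), [measured == y])
problem.solve()

for i, X in enumerate(Xs):
    truth = np.outer(hs[i], xs[i])
    err = np.linalg.norm(X.value - truth) / np.linalg.norm(truth)
    print(f"source {i}: relative error of lifted estimate {err:.2e}")
```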
We study the question of extracting a sequence of functions {f_i, g_i}_{i=1}^s from observing only the sum of their convolutions, i.e., from y = sum_{i=1}^s f_i * g_i. While convex optimization techniques are able to solve this joint blind deconvolution-demixing problem provably and robustly under certain conditions, for medium-size or large-size problems we need computationally faster methods without sacrificing the benefits of mathematical rigor that come with convex methods. In this paper we present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. Our two-step algorithm converges to the global minimum linearly and is also robust in the presence of additive noise. While the derived performance bounds are suboptimal in terms of the information-theoretic limit, numerical simulations show remarkable performance even if the number of measurements is close to the number of degrees of freedom. We discuss an application of the proposed framework in wireless communications in connection with the Internet-of-Things. * The authors acknowledge support from the NSF via grants DTRA-DMS 1322393 and DMS 1620455.
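As a rough illustration of the two-step structure described above (a spectral initialization followed by gradient refinement), here is a Python/NumPy sketch for a subspace demixing model. The regularizer, step-size rules, and convergence guarantees of the actual algorithm are omitted, and all dimensions and random matrices are illustrative assumptions.

```python
import numpy as np

# Sketch of a two-step approach for y = sum_i diag(B_i h_i) A_i x_i:
#  (1) spectral initialization: (1/L) * B_i^T diag(y) A_i has expectation
#      h_i x_i^T, so its leading singular vectors give a starting point;
#  (2) plain gradient descent on the least-squares loss.
# The paper's algorithm adds a regularizer and a careful step size, both
# omitted here; dimensions and Gaussian matrices are illustrative.

rng = np.random.default_rng(3)
r, L, K, N = 2, 200, 4, 4
Bs = [rng.standard_normal((L, K)) for _ in range(r)]
As = [rng.standard_normal((L, N)) for _ in range(r)]
h_true = [rng.standard_normal(K) for _ in range(r)]
x_true = [rng.standard_normal(N) for _ in range(r)]
y = sum((B @ h) * (A @ x) for B, A, h, x in zip(Bs, As, h_true, x_true))

def forward(hs, xs):
    return sum((B @ h) * (A @ x) for B, A, h, x in zip(Bs, As, hs, xs))

# Step 1: spectral initialization.
hs, xs = [], []
for i in range(r):
    M = (Bs[i].T * y) @ As[i] / L        # (1/L) * B_i^T diag(y) A_i
    U, s, Vt = np.linalg.svd(M)
    hs.append(np.sqrt(s[0]) * U[:, 0])
    xs.append(np.sqrt(s[0]) * Vt[0])

# Step 2: gradient descent on 0.5 * ||forward(hs, xs) - y||^2.
step = 1e-4
for _ in range(300):
    residual = forward(hs, xs) - y
    grads = [(Bs[i].T @ (residual * (As[i] @ xs[i])),
              As[i].T @ (residual * (Bs[i] @ hs[i]))) for i in range(r)]
    for i, (gh, gx) in enumerate(grads):
        hs[i] = hs[i] - step * gh
        xs[i] = xs[i] - step * gx

print("relative residual:", np.linalg.norm(forward(hs, xs) - y) / np.linalg.norm(y))
```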