This paper examines fundamental error characteristics for a general class of matrix completion problems, where the matrix of interest is a product of two a priori unknown matrices, one of which is sparse, and the observations are noisy. Our main contributions come in the form of minimax lower bounds on the expected per-element squared error for this problem under several common noise models. Specifically, we analyze scenarios where the corruptions are characterized by additive Gaussian noise or additive heavier-tailed (Laplace) noise, Poisson-distributed observations, and highly-quantized (e.g., one-bit) observations, as instances of our general result. Our results establish that the error bounds derived in (Soni et al., 2016) for complexity-regularized maximum likelihood estimators achieve, up to multiplicative constants and logarithmic factors, the minimax error rates in each of these noise scenarios, provided that the nominal number of observations is large enough and the sparse factor has (on average) at least one non-zero entry per column.

Index Terms: Matrix completion, dictionary learning, minimax lower bounds

I. INTRODUCTION

The matrix completion problem involves imputing the missing values of a matrix from an incomplete, and possibly noisy, sampling of its entries. In general, without making any assumptions about the entries of the matrix, the matrix completion problem is ill-posed and it is impossible to recover the matrix uniquely. However, if the matrix to be recovered has some intrinsic structure (e.g., low-rank structure), it is possible to design algorithms that exactly estimate the missing entries. Indeed, the performance of low-rank matrix completion methods has been extensively studied in noiseless settings [1]-[5], in noisy settings where the observations are affected by additive noise [6]-[12], and in settings where the observations are non-linear (e.g., highly-quantized or Poisson-distributed) functions of the underlying matrix entries (see [13]-[15]). Recent works exploring robust recovery of low-rank matrices under malicious sparse corruptions include [16]-[19]. A notable advantage of using low-rank models is that the estimation strategies involved in completing such matrices can be cast as efficient convex methods which are well understood and amenable to analysis. The
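To make the setting above concrete, the following is a minimal sketch of the sparse factor observation model described in the abstract; the notation (dimensions n_1, n_2, r, the sampling set S, and the noise term) is ours and may differ from the paper's.

\[
  X^{\ast} = D^{\ast} A^{\ast}, \qquad D^{\ast} \in \mathbb{R}^{n_1 \times r}, \quad A^{\ast} \in \mathbb{R}^{r \times n_2} \ \text{sparse},
\]
\[
  Y_{ij} \sim p(\,\cdot \mid X^{\ast}_{ij}), \quad (i,j) \in \mathcal{S} \subseteq [n_1] \times [n_2],
  \qquad \text{e.g., } Y_{ij} = X^{\ast}_{ij} + \xi_{ij}, \ \ \xi_{ij} \sim \mathcal{N}(0, \sigma^2) \ \text{in the Gaussian case},
\]
\[
  \text{and the error metric for an estimate } \widehat{X} \text{ is the per-element squared error } \ \frac{1}{n_1 n_2}\,\big\|\widehat{X} - X^{\ast}\big\|_F^2 .
\]

The Laplace, Poisson, and one-bit scenarios mentioned above correspond to different choices of the conditional distribution p(· | X*_{ij}); the minimax lower bounds apply to the expected value of this per-element error over any estimator.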
Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high-dimensional distributions. Most existing works implicitly assume that clean samples from the target distribution are easily available. However, in many applications this assumption is violated. In this paper, we consider the observation setting in which the samples from the target distribution are given by the superposition of two structured components, and we leverage GANs to learn the structure of the components. We propose two novel frameworks: denoising-GAN and demixing-GAN. The denoising-GAN assumes access to clean samples from the second component and tries to learn the distribution of the other component, whereas the demixing-GAN learns the distributions of both components at the same time. Through extensive numerical experiments, we demonstrate that the proposed frameworks can generate clean samples from unknown distributions, and provide competitive performance in tasks such as denoising, demixing, and compressive sensing.

1.1 Setup and Our Technique

Motivated by the recent success of generative models in high-dimensional statistical inference tasks such as compressed sensing [Bora et al., 2017, Bora et al., 2018], in this paper we focus on Generative Adversarial Network (GAN) based generative models to implicitly learn the distributions, i.e., to generate samples from these distributions. Most existing works on GANs typically assume access to clean samples from the underlying signal distribution. However, this assumption clearly breaks down in the superposition model considered in our setup, where the structured superposition makes training generative models very challenging.

In this context, we investigate the first question with varying degrees of assumption about the access to clean samples from the two signal sources. We first focus on the setting where we have access to samples only from the constituent signal class N and the observations Y_i. In this regard, we propose the denoising-GAN framework. However, assuming access to samples from one of the constituent signal classes can be restrictive and is often not feasible in real-world applications. Hence, we further relax this assumption and consider the more challenging demixing problem, where samples from the second constituent component are not available, and solve it using what we call the demixing-GAN framework.

Finally, to answer the second question, we use the trained generator(s) from the proposed GAN frameworks for denoising and demixing tasks on unseen test samples (i.e., samples not used in the training process) by discovering the best hidden representation of the constituent components from the generative models; a minimal sketch of this test-time step is given after this excerpt. In addition to the denoising and demixing problems, we also consider a compressive sensing setting to test the trained generator(s). Below we explicitly list the contributions made in this paper:
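The sketch below is our illustration (not the authors' code) of the test-time demixing step described above, under the assumption that an observation y is approximately G1(z1) + G2(z2) for trained generators G1 and G2: the hidden representations z1, z2 are recovered by gradient descent on the squared reconstruction error. The function name demix, the latent dimension, and the optimizer settings are our own choices.

import torch

def demix(y, G1, G2, latent_dim=100, steps=500, lr=1e-2):
    # y: observed superposition, with the same shape as the generators' output.
    # Initialize one latent code per trained generator.
    z1 = torch.randn(1, latent_dim, requires_grad=True)
    z2 = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z1, z2], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Squared error between the synthesized superposition and the observation.
        loss = ((G1(z1) + G2(z2) - y) ** 2).sum()
        loss.backward()
        opt.step()
    # The recovered components are the generator outputs at the fitted codes.
    return G1(z1).detach(), G2(z2).detach()

The denoising and compressive-sensing variants follow the same pattern: for denoising, only one latent code is optimized; for compressive sensing, the loss would compare A(G1(z1) + G2(z2)) with the compressed measurements for a given measurement operator A.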
We use plenoptic measurements of visible, infrared, and THz radiation to locate and image objects that are hidden from direct view by detecting their passive radiation scattered from rough surfaces.