The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis. Recently, it has been shown that significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably chosen generative model. In particular, in (Bora et al., 2017) it was shown that roughly O(k log L) random Gaussian measurements suffice for accurate recovery when the k-input generative model is bounded and L-Lipschitz, and that O(kd log w) measurements suffice for k-input ReLU networks with depth d and width w. In this paper, we establish corresponding algorithm-independent lower bounds on the sample complexity using tools from minimax statistical analysis. In accordance with the above upper bounds, our results are summarized as follows: (i) We construct an L-Lipschitz generative model capable of generating group-sparse signals, and show that Ω(k log L) measurements are necessary for accurate recovery; (ii) Using similar ideas, we construct two-layer ReLU networks of high width requiring Ω(k log w) measurements, as well as lower-width deep ReLU networks requiring Ω(kd) measurements. As a result, we establish that the scaling laws derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions.
I. INTRODUCTION

Over the past one to two decades, tremendous research effort has been devoted to theoretical and algorithmic studies of high-dimensional linear inverse problems [1], [2]. The prevailing approach has been to model low-dimensional structure via assumptions such as sparsity or low-rankness, and numerous algorithmic approaches have been shown to be successful, including convex relaxations [3], [4], greedy methods [5], [6], and more. The problem of sparse estimation via linear measurements (commonly referred to as compressive sensing) is particularly well understood, with theoretical developments including sharp performance bounds for both practical algorithms [4], [6]-[8] and (potentially intractable) information-theoretically optimal algorithms [9]-[12].

Following the tremendous success of deep generative models in a variety of applications [13], a new perspective on compressive sensing was recently introduced, in which the sparsity assumption is replaced by the assumption that the underlying signal is well-modeled by a generative model (typically corresponding to a deep neural network) [14]. This approach was seen to exhibit impressive performance in experiments, reducing the number of measurements by large factors such as 5 to 10 compared to sparsity-based methods. In addition, [14] provided theoretical guarantees on their proposed algorithm, essentially showing that an L-Lipschitz generative model with bounded k-dimensional inputs leads to reliable recovery with m = O(k log L) random Gaussian measurements (see Section II for a precise statement). Moreover, for a ReLU network generative model from R^k to R^n with width w and depth d, it was shown that m = O(kd log w) measurements suffice.
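To make this setup concrete, the following is a minimal NumPy sketch (ours, not code from [14]) of the recovery approach analyzed there: observe y = A G(z*) + noise for a generative model G and Gaussian matrix A, then estimate z* by gradient descent on ||A G(z) - y||^2 with random restarts. The untrained two-layer random generator, dimensions, step size, and iteration counts are illustrative assumptions, not choices made in [14].

```python
import numpy as np

rng = np.random.default_rng(0)
k, w, n, m = 4, 50, 100, 40   # latent dim, hidden width, signal dim, measurements

# Fixed random two-layer ReLU generator G : R^k -> R^n (untrained here for
# simplicity; any pre-trained generator could be plugged in instead).
W1 = rng.normal(size=(w, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, w)) / np.sqrt(w)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

A = rng.normal(size=(m, n)) / np.sqrt(m)       # i.i.d. Gaussian measurement matrix
z_true = rng.normal(size=k)
y = A @ G(z_true) + 0.01 * rng.normal(size=m)  # y = A G(z*) + noise

def descend(z, steps=3000, lr=1e-3):
    """Gradient descent on f(z) = 0.5 * ||A G(z) - y||^2 (chain rule by hand)."""
    for _ in range(steps):
        h = W1 @ z
        r = A @ (W2 @ np.maximum(h, 0.0)) - y            # residual A G(z) - y
        z = z - lr * (W1.T @ ((h > 0) * (W2.T @ (A.T @ r))))
    return z

# The objective is nonconvex, so we keep the best of several random restarts.
candidates = [descend(rng.normal(size=k)) for _ in range(10)]
z_hat = min(candidates, key=lambda z: np.linalg.norm(A @ G(z) - y))
print("relative error:",
      np.linalg.norm(G(z_hat) - G(z_true)) / np.linalg.norm(G(z_true)))
```

Gradient descent is only a heuristic here, since the objective is nonconvex in z; the guarantees of [14] concern the quality of the (possibly intractable) minimizer, which is precisely why the lower bounds in this paper are stated in an algorithm-independent manner.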