When recovering a sparse signal from noisy compressive linear measurements,
the distribution of the signal's non-zero coefficients can have a profound
effect on recovery mean-squared error (MSE). If this distribution were known
a priori, then one could use computationally efficient approximate message
passing (AMP) techniques for nearly minimum-MSE (MMSE) recovery. In practice,
though,
the distribution is unknown, motivating the use of robust algorithms like
LASSO---which is nearly minimax optimal---at the cost of significantly larger
MSE for non-least-favorable distributions. As an alternative, we propose an
empirical-Bayesian technique that simultaneously learns the signal distribution
while using AMP to compute a nearly-MMSE estimate of the signal under the
learned distribution. In particular, we model the distribution of the non-zero
coefficients as
a Gaussian mixture, and learn its parameters through expectation maximization,
using AMP to implement the expectation step. Numerical experiments on a wide
range of signal classes confirm the state-of-the-art performance of our
approach, in both reconstruction error and runtime, in the high-dimensional
regime, for most (but not all) sensing operators.
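
To make the pipeline concrete, below is a minimal NumPy sketch of the idea,
not a reference implementation: a scalar-variance AMP loop whose denoiser
computes the per-coefficient posterior statistics (the expectation step) under
a Bernoulli-Gaussian-mixture prior, followed by closed-form
maximization-step updates of the prior parameters. The function names, the
initialization, and the assumption of an i.i.d. Gaussian sensing matrix with
variance-1/M entries are ours, chosen for illustration.

\begin{verbatim}
# Minimal EM-GM-AMP sketch (illustrative; NOT the authors' reference code).
import numpy as np

def gm_denoise(r, tau, lam, w, theta, phi):
    """MMSE denoiser for r = x + N(0, tau) under the sparse prior
    p(x) = (1-lam) delta(x) + lam * sum_k w[k] N(x; theta[k], phi[k])."""
    r = r[:, None]                                  # (N,1) pseudo-measurements
    s = phi + tau                                   # (K,) evidence variances
    log_act = np.log(lam * w) - 0.5*np.log(2*np.pi*s) - 0.5*(r - theta)**2 / s
    log_off = np.log(1 - lam) - 0.5*np.log(2*np.pi*tau) - 0.5*r**2 / tau
    m = np.maximum(log_off, log_act.max(axis=1, keepdims=True))  # stability
    Z = np.exp(log_off - m) + np.exp(log_act - m).sum(axis=1, keepdims=True)
    beta = np.exp(log_act - m) / Z                  # (N,K) responsibilities
    gamma = (r*phi + theta*tau) / s                 # per-component post. means
    nu = tau*phi / s                                # per-component post. vars
    xhat = (beta * gamma).sum(axis=1)               # posterior mean E[x|r]
    var = (beta * (nu + gamma**2)).sum(axis=1) - xhat**2  # posterior variance
    return xhat, var, beta, gamma, nu

def em_update(beta, gamma, nu):
    """M-step: refit the prior parameters from the E-step statistics."""
    pi = beta.sum(axis=1)                           # P(x_n nonzero | r_n)
    lam = np.clip(pi.mean(), 1e-6, 1 - 1e-6)        # sparsity rate
    Nk = beta.sum(axis=0) + 1e-12
    w = Nk / Nk.sum()                               # mixture weights
    theta = (beta * gamma).sum(axis=0) / Nk         # component means
    phi = (beta * (nu + (gamma - theta)**2)).sum(axis=0) / Nk + 1e-12
    return lam, w, theta, phi

def em_gm_amp(y, A, K=3, n_iter=50):
    """AMP outer loop; gm_denoise is the E-step, em_update the M-step."""
    M, N = A.shape
    lam, w = 0.1, np.ones(K) / K                    # crude initialization
    theta, phi = np.linspace(-1.0, 1.0, K), np.ones(K)
    xhat, var = np.zeros(N), np.zeros(N)
    z, tau = np.zeros(M), 1.0
    for _ in range(n_iter):
        z = y - A @ xhat + (N/M) * z * var.mean() / tau  # Onsager correction
        tau = max(np.mean(z**2), 1e-12)             # pseudo-noise variance
        r = xhat + A.T @ z                          # AMP pseudo-measurements
        xhat, var, beta, gamma, nu = gm_denoise(r, tau, lam, w, theta, phi)
        lam, w, theta, phi = em_update(beta, gamma, nu)
    return xhat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, N = 400, 1000
    A = rng.standard_normal((M, N)) / np.sqrt(M)    # i.i.d. N(0, 1/M) entries
    x = np.where(rng.random(N) < 0.1, 2.0 + rng.standard_normal(N), 0.0)
    y = A @ x + 0.01 * rng.standard_normal(M)
    xhat = em_gm_amp(y, A)
    print("NMSE [dB]:", 10*np.log10(np.sum((xhat - x)**2) / np.sum(x**2)))
\end{verbatim}

Because the maximization step in this sketch reuses exactly the posterior
statistics that the AMP denoiser already computes, learning the prior adds
only O(NK) work per iteration on top of the AMP matrix multiplies.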