A greedy algorithm for the construction of a reduced model with reduction in both parameter and state is developed for an efficient solution of statistical inverse problems governed by partial differential equations with distributed parameters. Large-scale models are too costly to evaluate repeatedly, as is required in the statistical setting. Furthermore, these models often have high-dimensional parametric input spaces, which compounds the difficulty of effectively exploring the uncertainty space. We simultaneously address both challenges by constructing a projection-based reduced model that accepts low-dimensional parameter inputs and whose model evaluations are inexpensive. The associated parameter and state bases are obtained through a greedy procedure that targets the governing equations, model outputs, and prior information. The methodology and results are presented for groundwater inverse problems in one and two dimensions.

1. Introduction. Statistical inverse problems governed by partial differential equations (PDEs) with spatially distributed parameters pose a significant computational challenge for existing methods. While the cost of repeated PDE solutions can be addressed by traditional model reduction techniques, the difficulty of sampling in high-dimensional parameter spaces remains. We present a model reduction algorithm that seeks low-dimensional representations of parameters and states while maintaining fidelity in outputs of interest. The resulting reduced model accelerates model evaluations and facilitates efficient sampling in the reduced parameter space. The result is a tractable procedure for the solution of statistical inverse problems involving PDEs with high-dimensional parametric input spaces.

Given a parameterized mathematical model of a certain phenomenon, the forward problem is to compute output quantities of interest for specified parameter inputs.
In many cases, the parameters are uncertain, but they can be inferred from observations by solving an inverse problem. Inference is often performed by solving an optimization problem to minimize the disparity between model-predicted outputs and observations. Many inverse problems of this form are ill-posed in the sense that there may be many values of the parameters whose model-predicted outputs reproduce the observations. The set of parameters consistent with the observations may be larger still if we also admit noise in the sensor instruments. In the deterministic setting, a regularization term is often included in the objective function to make the problem well-posed. The *
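To make the regularized deterministic formulation concrete, one common form (in generic notation; the symbols below are illustrative and not fixed by the text above) is the Tikhonov-regularized least-squares problem

```latex
\min_{p}\;\; \frac{1}{2}\,\bigl\| f(p) - d \bigr\|_2^2
\;+\; \frac{\beta}{2}\,\bigl\| R\,p \bigr\|_2^2 ,
```

where $p$ denotes the distributed parameter, $f(p)$ the model-predicted outputs obtained by solving the governing PDE, $d$ the (possibly noisy) observations, $R$ a regularization operator (e.g., the identity or a discrete gradient), and $\beta > 0$ a weight controlling the trade-off between data misfit and regularity. The regularization term penalizes parameter fields that fit the data equally well but are physically implausible, restoring well-posedness.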