Abstract. In a typical inverse problem, a spatially distributed parameter in a physical model is estimated from measurements of model output. Since measurements are stochastic in nature, so is any parameter estimate. Moreover, in the Bayesian setting, the choice of regularization corresponds to the definition of the prior probability density function, which in turn is an uncertainty model for the unknown parameters. For both of these reasons, significant uncertainties exist in the solution of an inverse problem, and quantifying them is essential for a full understanding of that solution. When the physical model is linear and the error model and prior are Gaussian, the posterior density function is Gaussian with a known mean and covariance matrix. However, the electrical impedance tomography inverse problem is nonlinear, and hence no closed-form expression exists for the posterior density. The typical approach for such problems is to sample from the posterior and then use the samples to compute statistics (such as the mean and variance) of the unknown parameters. Sampling methods for electrical impedance tomography have been studied by various authors in the inverse problems community. Up to this point, however, the focus has been on increasingly sophisticated implementations of the Gibbs sampler, whose samples are known to converge very slowly to the correct density for large-scale problems. In this paper, we implement a recently developed sampling method called randomize-then-optimize (RTO), which yields a nearly independent sample with each application of an appropriate numerical optimization algorithm. The density that RTO samples from is not the posterior density, but RTO can be used as a very effective proposal within a Metropolis-Hastings algorithm to obtain samples from the posterior. Our focus here is on implementing the method on synthetic examples from electrical impedance tomography, and we show that it is computationally efficient and provides good results. We also compare RTO with the Metropolis-adjusted Langevin algorithm (MALA) and find RTO to be much more efficient.

1. Introduction. Solving an inverse problem requires estimating unknown parameters in a physical model from measurements of model output. The unknown parameter vector typically corresponds to a numerical discretization of a continuously defined, spatially distributed parameter in the physical model and hence is high-dimensional. This leads to an overparameterized statistical model, so that estimates have large variance or, in other words, are unstable with respect to perturbations in the observations. Regularization methods [36], which have been studied for decades in the inverse problems literature, are the typical fix for such instability.

Over the past ten years or so, an increasing number of researchers have drawn the connection between regularized solutions of inverse problems and Bayesian statistics [20]. Since the measured data in an inverse problem will always contain noise, a ...
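For concreteness, the linear-Gaussian case invoked in the abstract admits the following standard closed form. This is a sketch in generic notation (forward matrix A, data y, noise covariance \Gamma_{\text{noise}}, prior mean x_0, prior covariance \Gamma_{\text{pr}}); the symbols are assumed here for illustration rather than taken from this paper:

% Standard linear-Gaussian posterior; notation assumed, not the paper's.
\[
y = A x + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \Gamma_{\text{noise}}), \qquad x \sim \mathcal{N}(x_0, \Gamma_{\text{pr}}),
\]
\[
x \mid y \sim \mathcal{N}(\bar{x}, \Gamma_{\text{post}}), \qquad \Gamma_{\text{post}} = \bigl(A^{\top} \Gamma_{\text{noise}}^{-1} A + \Gamma_{\text{pr}}^{-1}\bigr)^{-1}, \qquad \bar{x} = \Gamma_{\text{post}} \bigl(A^{\top} \Gamma_{\text{noise}}^{-1} y + \Gamma_{\text{pr}}^{-1} x_0\bigr).
\]

For a nonlinear forward map, as in electrical impedance tomography, no analogous closed form is available, which is what motivates the sampling approach taken in this paper.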
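The abstract also describes RTO as a proposal within a Metropolis-Hastings algorithm. The following is a minimal sketch of that idea, assuming the posterior has been whitened into the least-squares form \pi(x) \propto \exp(-\tfrac{1}{2}\|H(x)\|^2), with H stacking the data-misfit and prior residuals; the function names and the use of scipy.optimize.least_squares are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.optimize import least_squares

def rto_metropolis_hastings(H, jac, x_map, n_samples, rng):
    """Independence Metropolis-Hastings with randomize-then-optimize proposals.

    Assumes pi(x) ∝ exp(-0.5 * ||H(x)||^2), with H : R^n -> R^M (M >= n)
    stacking the whitened data-misfit and prior residuals, jac(x) its
    Jacobian, and x_map the MAP estimate (minimizer of 0.5 * ||H(x)||^2).
    """
    Q, _ = np.linalg.qr(jac(x_map))      # thin QR of the Jacobian at the MAP
    M = Q.shape[0]

    def log_weight(x):
        # log of pi(x) / q_RTO(x) up to a constant, where the RTO proposal
        # density is q_RTO(x) ∝ |det(Q^T J(x))| exp(-0.5 ||Q^T H(x)||^2)
        Hx = H(x)
        _, logdet = np.linalg.slogdet(Q.T @ jac(x))
        QtH = Q.T @ Hx
        return -logdet - 0.5 * (Hx @ Hx) + 0.5 * (QtH @ QtH)

    x, lw = x_map, log_weight(x_map)
    chain = []
    for _ in range(n_samples):
        xi = rng.standard_normal(M)      # "randomize": perturb the residual
        prop = least_squares(            # "optimize": solve Q^T(H(x) - xi) = 0
            lambda z: Q.T @ (H(z) - xi), x_map,
            jac=lambda z: Q.T @ jac(z)).x    # assumes a unique root is found
        lw_prop = log_weight(prop)
        if np.log(rng.uniform()) < lw_prop - lw:   # MH accept/reject
            x, lw = prop, lw_prop
        chain.append(x)
    return np.array(chain)

Because each proposal is generated independently of the current state, the resulting chain is an independence sampler; this is why, as the abstract notes, RTO yields nearly independent samples whenever the weights are well behaved.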