Often the primary goal of fitting a regression model is prediction, but the majority of work in recent years focuses on inference tasks, such as estimation and feature selection. In this paper we adopt the familiar sparse, high-dimensional linear regression model but focus on the task of prediction. In particular, we consider a new empirical Bayes framework that uses the data to appropriately center the prior distribution for the non-zero regression coefficients, and we investigate the method's theoretical and numerical performance in the context of prediction. We show that, in certain settings, the asymptotic posterior concentration in metrics relevant to prediction quality is very fast, and we establish a Bernstein–von Mises theorem which ensures that the derived prediction intervals achieve the target coverage probability. Numerical results complement the asymptotic theory, showing that the method has strong finite-sample performance in terms of both prediction accuracy and uncertainty quantification, and that its computation time is considerably faster than that of existing Bayesian methods.

An initial obstacle to achieving this aim is that the model above cannot be fit without some additional structure. As is common in the literature, we will assume a sparsity structure on the high-dimensional β vector. That is, we will assume that most of the entries in β are zero; this will be made more precise in the following sections. With this assumed structure, a plethora of methods are now available for estimating a sparse β, e.g., the lasso (Tibshirani 1996), the adaptive lasso (Zou 2006), SCAD (Fan and Li 2001), and others; moreover, software is available to carry out the relevant computations easily and efficiently. Given an estimator of β, it is conceptually straightforward to produce a point prediction of a new response.
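To fix ideas, here is a minimal sketch (not from the paper) of this two-step recipe: estimate a sparse β via lasso-type soft-thresholding, then plug the estimate into x_newᵀβ̂ for a point prediction. The coordinate-descent solver below is a bare-bones illustration; the simulated design, sparsity level, and penalty value λ are all arbitrary choices for the example.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator, the building block of lasso solvers.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate descent for (1/(2n))||y - Xb||^2 + lam * ||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n   # per-coordinate curvature
    r = y - X @ b                        # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]          # remove coordinate j's contribution
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam) / col_ss[j]
            r -= X[:, j] * b[j]
    return b

# Simulated high-dimensional data: p > n, with only s non-zero coefficients.
rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
beta = np.zeros(p)
beta[:s] = 3.0
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

b_hat = lasso_cd(X, y, lam=0.2)          # sparse estimate of beta
x_new = rng.standard_normal(p)
y_hat = x_new @ b_hat                    # point prediction of a new response
```

As the paper notes next, obtaining a point prediction this way is easy; the difficulty lies in attaching valid uncertainty quantification to it.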
However, the regularization techniques employed by these methods cause the estimators to have non-regular distribution theory (e.g., Pötscher and Leeb 2009), so results on uncertainty quantification, i.e., coverage properties of prediction intervals, are few in number; but see Leeb (2006, 2009) and the references therein.

On the Bayesian side, given a full probability model, it is conceptually straightforward to obtain a predictive distribution for the new response and derive some form of uncertainty quantification, but there are still a number of challenges. First, in high-dimensional cases such as this, the choice of prior matters, so specifying prior distributions that lead to desirable operating characteristics of the posterior distribution, e.g., optimal posterior concentration rates, is non-trivial. Castillo et al. (2015) and others have demonstrated that, in order to achieve the optimal concentration rates, the prior for the non-zero β coefficients must have sufficiently heavy tails, in particular, heavier than the conjugate Gaussian tails. This constraint leads to the second challenge, namely, computation of the posterior distribution. While general Markov chain Monte Carlo (MCMC) methods are avail...