Principal component regression uses principal components as regressors. It is particularly useful in prediction settings with high-dimensional covariates. The existing literature on Bayesian approaches to it is relatively sparse. We introduce a Bayesian approach that is robust to outliers in both the dependent variable and the covariates. Outliers can be thought of as observations that are not in line with the general trend. The proposed approach automatically penalises these observations so that their impact on the posterior gradually vanishes as they move further and further away from the general trend, leading to whole robustness. The predictions produced are thus consistent with the bulk of the data. The approach also exploits the geometry of principal components to efficiently identify those that are significant. Individual predictions obtained from the resulting models are consolidated through model-averaging mechanisms to account for model uncertainty. The approach is evaluated on real data and compared to its nonrobust Bayesian counterpart, the traditional frequentist approach, and a commonly employed robust frequentist method. Detailed guidelines to automate the entire statistical procedure are provided. All required code is made available; see arXiv:1711.06341.

The data are further summarised using two different linear regression models that respectively yield a robust regression line (in green) and an ordinary least squares regression line (in red). Given that different summaries lead to different data points, and therefore to different regressions, one might however wonder whether the available or natural summaries are necessarily suitable for the tasks at hand.

Principal component regression (PCR) is the name given to a linear regression model that uses principal components (PCs) as regressors. It is based on a principal component analysis (PCA), which is commonly used to summarise the information contained in covariates. The principle is to find new axes in the covariate space by exploiting the correlation structure between the covariates, and then to encode the covariate observations in that new coordinate system. The resulting variables, the PCs, are linearly independent and have the remarkable property that the first q PCs retain the maximum amount of information carried by the original observations, compared with any other q-dimensional summary. Regrouping correlated variables to produce linearly independent ones is appealing in a linear regression context, as strongly correlated variables are known to carry redundant information, leading to unstable estimates. Companies within the same economic sector in stock market indices such as the S&P 500 and the S&P/TSX are an example of such correlated variables. Linear independence also allows the relationship between the dependent variable and the PCs to be visualised by plotting the dependent variable against each of the PCs.

Because transforming the covariates reduces the interpretability of the inference results, PCR is mainly used in prediction settings.
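To make the mechanics of PCR concrete, the following is a minimal sketch of the classical (nonrobust, frequentist) version described above: the covariates are centred, the PC axes are obtained from a singular value decomposition, and the dependent variable is regressed on the first q PCs by ordinary least squares. The function name, the synthetic data, and the choice q = 3 are illustrative assumptions only; they are not part of the paper's procedure, which replaces the least-squares step with a robust Bayesian model and averages predictions over choices of PCs.

```python
import numpy as np

def pcr_fit_predict(X_train, y_train, X_new, q):
    """Classical PCR: regress y on the first q principal components of X.

    A minimal nonrobust sketch; the robust Bayesian procedure of the
    paper replaces the least-squares step below.
    """
    # Centre the covariates (keeping the means to encode new observations)
    x_mean = X_train.mean(axis=0)
    Xc = X_train - x_mean

    # PCA via SVD: the columns of Vt.T are the new axes (loadings),
    # ordered so that the first q PCs retain the most variance
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:q].T                      # n_features x q loading matrix
    Z = Xc @ W                        # training PCs (scores)

    # Ordinary least squares of y on the q PCs, plus an intercept;
    # the PC scores are orthogonal, so the fit is numerically stable
    Z1 = np.column_stack([np.ones(len(Z)), Z])
    beta, *_ = np.linalg.lstsq(Z1, y_train, rcond=None)

    # Encode the new observations in the PC coordinate system and predict
    Z_new = (X_new - x_mean) @ W
    return np.column_stack([np.ones(len(Z_new)), Z_new]) @ beta

# Illustrative use on synthetic, strongly correlated covariates
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=100)   # redundant covariate pair
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=100)
print(pcr_fit_predict(X[:80], y[:80], X[80:], q=3))
```

In this sketch, dropping the trailing PCs (keeping only q of them) is what stabilises the regression in the presence of the correlated covariate pair; the paper's contribution is to make both the regression step and the selection of PCs robust to outliers.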