We consider Markov Decision Processes (MDPs) in which distributional parameters, such as transition probabilities, are unknown and must be estimated from data. The popular distributionally robust approach to parameter uncertainty can be overly conservative. In this paper, we propose a Bayesian risk approach to MDPs with parameter uncertainty, where a risk functional is applied in nested form to the expected discounted total cost with respect to the Bayesian posterior distribution of the unknown parameters at each time stage. The proposed approach provides more flexibility in modeling risk attitudes toward parameter uncertainty and accounts for the availability of data in future time stages. For finite-horizon MDPs, we show that the dynamic programming equations can be solved efficiently with an upper confidence bound (UCB) based adaptive sampling algorithm. For infinite-horizon MDPs, we propose a risk-adjusted Bellman operator and show that it is a contraction mapping whose fixed point is the optimal value function of the Bayesian risk formulation. We demonstrate the empirical performance of our proposed algorithms in the finite-horizon case on an inventory control problem and a path planning problem.
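To illustrate the nested form described above, a minimal sketch of the finite-horizon recursion follows, assuming a one-step risk functional \(\rho\) (e.g., expectation or CVaR) taken over the posterior; the symbols \(c\), \(\gamma\), \(P_\theta\), and \(\mu_t\) are illustrative notation and need not match the paper's exact definitions:
\[
V_t(s_t,\mu_t) \;=\; \min_{a_t \in \mathcal{A}} \;\rho_{\theta \sim \mu_t}\!\left[\, c(s_t,a_t,\theta) \;+\; \gamma\, \mathbb{E}_{s_{t+1}\sim P_\theta(\cdot \mid s_t,a_t)}\!\left[ V_{t+1}(s_{t+1},\mu_{t+1}) \right] \right],
\qquad V_T(s_T,\mu_T) \equiv 0,
\]
where \(\mu_{t+1}\) denotes the posterior obtained by Bayesian updating of \(\mu_t\) with the transition observed at stage \(t\). Applying \(\rho\) at every stage, rather than once to the total cost, is what makes the formulation nested and allows the risk attitude toward parameter uncertainty to reflect the data available at each stage.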