Estimating the predictive uncertainty of pretrained language models is important for increasing their trustworthiness in NLP. Although many previous works focus on quantifying prediction uncertainty, little work explains where that uncertainty comes from. This paper goes a step further by explaining the uncertain predictions of post-calibrated pretrained language models. We adapt two perturbation-based post-hoc interpretation methods, Leave-one-out and Sampling Shapley, to identify the words in an input that cause prediction uncertainty. We evaluate the proposed methods on BERT and RoBERTa across three tasks: sentiment classification, natural language inference, and paraphrase identification, in both in-domain and out-of-domain settings. Experiments show that both methods consistently capture the words in inputs that cause prediction uncertainty.
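To make the perturbation-based idea concrete, the following is a minimal sketch of Leave-one-out attribution for predictive uncertainty, not the paper's actual implementation. It assumes a hypothetical `predict_proba` callable (e.g., wrapping a calibrated BERT or RoBERTa classifier) that maps a token list to a class-probability vector, and scores each word by how much its removal changes the predictive entropy.

```python
# Minimal sketch of Leave-one-out attribution for predictive uncertainty.
# `predict_proba` is a hypothetical interface assumed for illustration,
# not the interface used in the paper.
import math
from typing import Callable, List, Sequence


def entropy(probs: Sequence[float]) -> float:
    """Predictive entropy of a class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def leave_one_out_uncertainty(
    tokens: List[str],
    predict_proba: Callable[[List[str]], Sequence[float]],
) -> List[float]:
    """Score each token's contribution to prediction uncertainty.

    A positive score means removing the token lowers the entropy,
    i.e., the token is a source of the model's uncertainty.
    """
    base_entropy = entropy(predict_proba(tokens))
    scores = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]
        scores.append(base_entropy - entropy(predict_proba(reduced)))
    return scores
```

Sampling Shapley follows the same perturbation principle but averages such marginal contributions over random orderings of the input words rather than removing one word at a time.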