2022
DOI: 10.48550/arxiv.2201.03742
Preprint

Explaining Prediction Uncertainty of Pre-trained Language Models by Detecting Uncertain Words in Inputs

Abstract: Estimating the predictive uncertainty of pre-trained language models is important for increasing their trustworthiness in NLP. Although many previous works focus on quantifying prediction uncertainty, there is little work on explaining that uncertainty. This paper pushes a step further by explaining the uncertain predictions of post-calibrated pre-trained language models. We adapt two perturbation-based post-hoc interpretation methods, Leave-one-out and Sampling Shapley, to identify words in inputs that cause the uncertainty…
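The abstract names Leave-one-out as one of the two adapted attribution methods. As a rough, hypothetical sketch of that general idea (not the paper's exact formulation, which works on post-calibrated models), one can mask each input word and score it by the resulting drop in predictive entropy; `predict_proba` and `toy_predict_proba` below are illustrative placeholders, not an API from the paper:

```python
import numpy as np

def predictive_entropy(probs):
    # Shannon entropy of the predicted class distribution,
    # a common scalar proxy for prediction uncertainty.
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs))

def leave_one_out_uncertainty(words, predict_proba, mask_token="[MASK]"):
    """Score each word by how much masking it lowers predictive entropy.
    A large positive score marks a word whose removal makes the model
    more confident, i.e. a word that causes the uncertainty."""
    base = predictive_entropy(predict_proba(" ".join(words)))
    scores = []
    for i in range(len(words)):
        perturbed = words[:i] + [mask_token] + words[i + 1:]
        h = predictive_entropy(predict_proba(" ".join(perturbed)))
        scores.append(base - h)  # entropy drop when word i is masked
    return scores

if __name__ == "__main__":
    # Toy stand-in for a calibrated classifier head on a pre-trained LM:
    # it pretends the hedge word "arguably" makes the prediction uncertain.
    def toy_predict_proba(text):
        p = 0.5 if "arguably" in text else 0.95
        return np.array([p, 1.0 - p])

    words = "the movie was arguably good".split()
    for w, s in zip(words, leave_one_out_uncertainty(words, toy_predict_proba)):
        print(f"{w:>10s}  {s:+.3f}")
```

In this toy run, only masking "arguably" yields a positive score, which is the behavior the method is meant to surface; with a real model, `predict_proba` would wrap a fine-tuned, calibrated classifier.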

Cited by 1 publication (1 citation statement, from 2022). References 13 publications.
“…Shen et al. (2020) attempt to transfer the idea of prior networks (Malinin and Gales, 2018; Joo et al., 2020) onto recurrent neural networks. Another line of work investigates uncertainty properties themselves; for instance, Chen and Ji (2022) try to explain uncertainty estimates for BERT and RoBERTa. Another example is given by Xiao and Wang (2021), who use predictive uncertainty to explain hallucination in language generation.…”
Section: Introduction
confidence: 99%