ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8683359
Towards Better Confidence Estimation for Neural Models

Cited by 23 publications (15 citation statements)
References 9 publications
“…These probabilities can already be interpreted as a prediction of the data uncertainty. However, it is widely discussed that neural networks are often over-confident and the softmax output is often poorly calibrated, leading to inaccurate uncertainty estimates [95], [67], [44], [92]. Furthermore, the softmax output cannot be associated with model uncertainty.…”
Section: Uncertainty Estimation (mentioning)
confidence: 99%
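To make the calibration point concrete, here is a minimal numpy sketch (not from the cited paper) of the max-softmax confidence score and of temperature scaling, a common post-hoc recalibration; the logits and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 softens over-confident outputs.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([8.0, 1.0, 0.5])       # hypothetical over-confident logits
print(softmax(logits).max())             # ~0.9985: "confidence" can far exceed true accuracy
print(softmax(logits, T=3.0).max())      # ~0.85 after temperature scaling
```

Note that temperature scaling only rescales the softmax on in-distribution data; as the quoted statement says, the (scaled or unscaled) softmax still carries no model uncertainty.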
“…For simple classifiers, measuring softmax probabilities, or information-theoretic metrics such as entropy and mutual information [37], may suffice. For more complex networks, such as those that operate on an SDC, softmax probabilities and entropy are known to be unreliable confidence estimators [52]. Moreover, white-box metrics require transparent access to the network, and substantial domain knowledge for the creation of nontrivial probabilistic models that approximate the network's uncertainty.…”
Section: Introduction (mentioning)
confidence: 99%
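The entropy and mutual-information scores referenced here are typically computed from repeated stochastic forward passes (e.g. MC dropout). A minimal sketch, assuming `probs` holds the softmax vectors of S passes for one input; the Dirichlet draw merely stands in for real network outputs.

```python
import numpy as np

def predictive_entropy(probs):
    # probs: (S, C) softmax outputs from S stochastic forward passes.
    # Entropy of the mean prediction: total predictive uncertainty.
    mean_p = probs.mean(axis=0)
    return -(mean_p * np.log(mean_p + 1e-12)).sum()

def mutual_information(probs):
    # MI = H[mean prediction] - mean of per-pass entropies.
    # High MI indicates disagreement between passes, i.e. model uncertainty.
    h_mean = predictive_entropy(probs)
    h_each = -(probs * np.log(probs + 1e-12)).sum(axis=1).mean()
    return h_mean - h_each

# Hypothetical MC-dropout outputs: 20 passes over a 10-class problem.
probs = np.random.default_rng(0).dirichlet(np.ones(10), size=20)
print(predictive_entropy(probs), mutual_information(probs))
```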
“…Trustworthiness of a simple DNN can be measured with softmax probabilities [13], or information-theoretic metrics such as entropy [14] and mutual information [15]. However, in complex DNNs with many layers and neurons, softmax probabilities and entropy are unreliable confidence estimators of the prediction [16], [17]. Even for abnormal samples, DNNs may still produce overconfident posterior probabilities.…”
Section: Introduction (mentioning)
confidence: 99%
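A small worked example of the overconfidence failure described here: on an abnormal input, a single peaked softmax makes both the max-probability and entropy scores report high confidence, so neither flags the sample. The posterior vector is a hypothetical illustration.

```python
import numpy as np

# Hypothetical posterior emitted for an abnormal (out-of-distribution) sample:
# the network can still produce a peaked softmax on inputs it has never seen.
p_abnormal = np.array([0.97, 0.01, 0.01, 0.01])

max_prob = p_abnormal.max()                         # 0.97  -> looks "confident"
entropy = -(p_abnormal * np.log(p_abnormal)).sum()  # ~0.17 nats -> also "confident"
print(max_prob, entropy)
# Neither single-pass score flags the sample; this is the unreliability
# the statement attributes to [16], [17].
```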