2021
DOI: 10.48550/arxiv.2110.08030
Preprint

Identifying Incorrect Classifications with Balanced Uncertainty

Abstract: Uncertainty estimation is critical for cost-sensitive deep-learning applications (e.g., disease diagnosis). It is very challenging, partly due to the inaccessibility of uncertainty ground truth in most datasets. Previous works proposed to estimate uncertainty from softmax calibration, Monte Carlo sampling, subjective logic, and so on. However, these existing methods tend to be overconfident about their predictions with unreasonably low overall uncertainty, which originates from the imbalance between positive (co…
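The imbalance the abstract alludes to (described by the citing work below as an imbalance between correct and incorrect predictions) can be made concrete with a small sketch. The accuracy, sample size, and threshold below are illustrative assumptions, not values from the paper; the point is only that a uniformly "certain" estimator looks acceptable when correct predictions dominate.

```python
import numpy as np

# Hypothetical setup: a classifier that is right ~90% of the time.
# When uncertainty is evaluated as a misclassification detector, the
# correct predictions vastly outnumber the incorrect ones, so an
# overconfident estimator that reports low uncertainty everywhere
# still "trusts" mostly correct predictions.
rng = np.random.default_rng(0)
n = 10_000
is_correct = rng.random(n) < 0.9      # ~90% of predictions are correct
uncertainty = np.full(n, 0.05)        # overconfident: always near-certain

trusted = uncertainty < 0.5           # naive "trust if uncertainty is low" rule
print("fraction of predictions trusted:", trusted.mean())                     # 1.0
print("error rate among trusted predictions:", (~is_correct[trusted]).mean()) # ~0.1
```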

Cited by 1 publication (5 citation statements)
References 22 publications
“…Confidence Estimation. Being an important task that helps determine whether a deep predictor's predictions can be trusted, confidence estimation has been studied extensively across various computer vision tasks [14,11,19,5,34,36,32,44,4,35,26,43,47]. At the beginning, Hendrycks and Gimpel [14] proposed Maximum Class Probability utilizing the classifier softmax distribution, Gal and Ghahramani [11] proposed MC-Dropout from the perspective of uncertainty estimation, and Jiang et al. [19] proposed Trust Score to calculate the agreement between the classifier and a modified nearest-neighbor classifier on the testing set.…”
Section: Related Work
confidence: 99%
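For readers unfamiliar with the baselines named in this citation statement, the sketch below illustrates the general recipes behind Maximum Class Probability [14] and MC-Dropout [11] confidence scores. The model, tensor shapes, and number of samples are placeholder assumptions; this is not the authors' implementation of either method.

```python
import torch
import torch.nn.functional as F

def max_class_probability(logits: torch.Tensor) -> torch.Tensor:
    """Maximum Class Probability: confidence = highest softmax probability."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def mc_dropout_confidence(model: torch.nn.Module,
                          x: torch.Tensor,
                          n_samples: int = 20) -> torch.Tensor:
    """MC-Dropout-style estimate: keep dropout active at test time,
    average the softmax over several stochastic forward passes, and
    take the mean probability of the predicted class as confidence."""
    model.train()  # keeps dropout layers stochastic during inference
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    return mean_probs.max(dim=-1).values
```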
“…correct and incorrect task model predictions from each other. Afterwards, Li et al. [26] proposed an extension to True Class Probability [5] that uses a Distributional Focal Loss to focus more on predictions with higher uncertainty. Unlike previous methods that design strategies to handle a specific imbalanced distribution of correct and incorrect labels, we adopt a novel perspective and tackle the label imbalance problem through meta-learning, which allows our confidence estimator to learn distribution-generalizable knowledge for handling a variety of diverse label distributions.…”
Section: Related Work
confidence: 99%
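As a rough illustration of the True Class Probability idea [5] that the citing work extends, the sketch below trains an auxiliary confidence head to regress the softmax probability assigned to the ground-truth class. The plain MSE objective is a hedged stand-in only; the Distributional Focal Loss of [26] is not reproduced here, and all names and shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def tcp_target(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """True Class Probability: the softmax probability the classifier
    assigns to the ground-truth class (low for misclassified samples)."""
    probs = F.softmax(logits, dim=-1)
    return probs.gather(1, labels.unsqueeze(1)).squeeze(1)

def confidence_regression_loss(pred_confidence: torch.Tensor,
                               logits: torch.Tensor,
                               labels: torch.Tensor) -> torch.Tensor:
    """Train an auxiliary confidence head to regress the TCP target,
    in the spirit of ConfidNet-style failure prediction. Plain MSE is
    used here; [26] instead applies a Distributional Focal Loss that
    emphasises high-uncertainty predictions."""
    target = tcp_target(logits, labels).detach()
    return F.mse_loss(pred_confidence, target)
```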