2022
DOI: 10.3390/electronics11030396

Uncertainty-Based Rejection in Machine Learning: Implications for Model Development and Interpretability

Abstract: Uncertainty is present in every single prediction of Machine Learning (ML) models. Uncertainty Quantification (UQ) is arguably relevant, in particular for safety-critical applications. Prior research focused on the development of methods to quantify uncertainty; however, less attention has been given to how the knowledge of uncertainty can be leveraged during model development. This work focused on applying UQ in practice, closing the gap of its utility in the ML pipeline and giving insights into how U…

Cited by 11 publications (5 citation statements) · References: 40 publications
“…For predictions with high uncertainty, the observations can be passed on to a human expert for a label. The goal of a classification with rejection system (Mena et al., 2021; Barandas et al., 2022) is to help decide when to stop rejecting the most uncertain observations. The system takes as input the per-observation uncertainty values and outputs three metrics to assist the decision maker in finding the optimal rejection threshold for the task at hand.…”
Section: Classification with Rejection
confidence: 99%
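
The excerpt does not spell out which three metrics the system reports. A minimal sketch, assuming the non-rejected accuracy, classification quality, and rejection quality measures common in the classification-with-rejection literature (the function name and the threshold sweep are illustrative):

```python
import numpy as np

def rejection_metrics(y_true, y_pred, uncertainty, threshold):
    """Evaluate a classify-or-reject policy at one uncertainty threshold.

    Observations whose uncertainty exceeds `threshold` are rejected,
    i.e., deferred to a human expert; the rest are accepted.
    """
    rejected = uncertainty > threshold
    correct = y_pred == y_true

    acc_correct = np.sum(correct & ~rejected)    # accepted and correct
    rej_incorrect = np.sum(~correct & rejected)  # rejected and incorrect

    # Non-rejected accuracy: accuracy on the accepted subset only.
    n_accepted = np.sum(~rejected)
    nra = acc_correct / n_accepted if n_accepted else np.nan

    # Classification quality: fraction of "good" decisions overall,
    # i.e., accepting correct predictions and rejecting incorrect ones.
    cq = (acc_correct + rej_incorrect) / len(y_true)

    # Rejection quality: how concentrated misclassifications are among
    # the rejected observations, relative to their overall rate.
    rej_correct = np.sum(correct & rejected)
    n_incorrect, n_correct = np.sum(~correct), np.sum(correct)
    if rej_correct and n_incorrect:
        rq = (rej_incorrect / rej_correct) / (n_incorrect / n_correct)
    else:
        rq = np.inf  # no correct rejections (or no errors at all)
    return nra, cq, rq

# Sweep candidate thresholds on a validation set to pick an operating point:
# for t in np.quantile(uncertainty, np.linspace(0.5, 1.0, 21)):
#     print(t, rejection_metrics(y_true, y_pred, uncertainty, t))
```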
“…$p(y^* \mid x^*, \theta^{(t)})$ varies greatly for different weights $\theta^{(t)}$. Intuitively, data uncertainty measures uncertainty in the softmax classification on individual samples; model uncertainty measures how much the samples deviate (Hüllermeier & Waegeman, 2021; Barandas et al., 2022).…”
confidence: 99%
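
A minimal sketch of the entropy-based decomposition this excerpt alludes to, where total predictive uncertainty splits into a data (expected entropy) term and a model (mutual information) term; the (T, N, C) array layout and function names are assumptions:

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a probability array along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(probs):
    """probs: softmax outputs of shape (T, N, C) for T weight samples
    theta^(t) (e.g., MC dropout or a deep ensemble), N inputs, C classes."""
    mean_p = probs.mean(axis=0)           # averaged predictive distribution
    total = entropy(mean_p)               # total predictive uncertainty
    data = entropy(probs).mean(axis=0)    # expected entropy: data uncertainty
    model = total - data                  # mutual information: model uncertainty
    return total, data, model
```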
“…KNN and distance metrics for anomaly detection have been studied previously [18,5,26], and have recently seen use in OOD and adversarial detection [35,1,39]. Furthermore, the notion of classification with rejection has been studied in depth [14,22,4]. However, this line of work often presents a rejection at the end of inference and so is functionally not dissimilar to methods which calibrate their outputs.…”
Section: Related Work
confidence: 99%
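
For context, a generic sketch of a k-NN distance score of the kind used for OOD and anomaly detection, not the specific method of any cited work; the scorer name, the scikit-learn backend, and the choice of the k-th neighbor distance are assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_knn_ood_scorer(train_features, k=10):
    """Index in-distribution feature embeddings for k-NN distance scoring."""
    index = NearestNeighbors(n_neighbors=k).fit(train_features)

    def score(test_features):
        # Distance to the k-th nearest training neighbor; larger distances
        # suggest the input lies far from the training data and may be OOD.
        dists, _ = index.kneighbors(test_features)
        return dists[:, -1]

    return score
```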
“…Uncertainty quantification can be obtained by means of sampling or direct estimation, recovering measures of uncertainty such as entropy, variance, and mutual information. Methods based on such approaches have been used for improving classification in activity recognition tasks [32][33][34], as well as for eliminating predicted samples with high uncertainty [35]. In the context of multi-instance learning, uncertainty quantification has been used to improve instance-level classifiers [36], and to aid an active learning scheme that provides different levels of confidence about predicted samples for weak labelers (for the models under training) or strong ones (for the samples that had available labels) [37].…”
Section: Related Work
confidence: 99%
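
A minimal sketch of sampling-based UQ followed by eliminating the most uncertain predictions, as described in this excerpt; the quantile cutoff and helper names are assumptions:

```python
import numpy as np

def sampled_uncertainty(probs):
    """probs: (T, N, C) softmax samples (e.g., from MC dropout forward passes).

    Returns predicted labels plus two sampling-based uncertainty measures:
    entropy of the mean prediction and total class-probability variance.
    """
    mean_p = probs.mean(axis=0)
    ent = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=-1)
    var = probs.var(axis=0).sum(axis=-1)
    return mean_p.argmax(axis=-1), ent, var

def drop_most_uncertain(preds, uncertainty, keep_quantile=0.9):
    """Eliminate the most uncertain predictions, keeping the confident rest."""
    keep = uncertainty <= np.quantile(uncertainty, keep_quantile)
    return preds[keep], keep
```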