2023
DOI: 10.1007/s10472-023-09837-2
Investigating the impact of calibration on the quality of explanations

Abstract: Predictive models used in Decision Support Systems (DSS) are often requested to explain their reasoning to users. Explanations of instances consist of two parts: the predicted label with an associated certainty, and a set of weights, one per feature, describing how each feature contributes to the prediction for the particular instance. In techniques like Local Interpretable Model-agnostic Explanations (LIME), the probability estimate from the underlying model is used as a measurement of certainty; consequently, t…
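The abstract notes that LIME takes the underlying model's probability estimate as its measure of certainty, which is the motivation for adding a calibration layer. As a toy illustration (not the paper's method, and with made-up parameters), Platt-style logistic scaling is one common form such a layer can take:

```python
import math

# Toy sketch of a "calibration layer": Platt-style logistic scaling maps
# a model's raw probability estimate to a better-calibrated one.
# Parameters a and b are assumed to have been fitted on a held-out
# calibration set; a < 1 shrinks overconfident estimates toward 0.5.

def platt(p, a=0.7, b=-0.2, eps=1e-9):
    """Apply logistic (Platt) scaling to a raw probability p."""
    logit = math.log((p + eps) / (1 - p + eps))
    return 1.0 / (1.0 + math.exp(-(a * logit + b)))

raw = 0.9                 # overconfident estimate from the model
calibrated = platt(raw)   # ~0.79: pulled toward 0.5
print(calibrated < raw)   # True
```

If the certainty shown alongside an explanation comes from `calibrated` rather than `raw`, the explanation's stated confidence better matches the model's actual accuracy.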



Cited by 3 publications (6 citation statements)
References 24 publications
“…The major findings of the study argue that local explanations improve user trust and AI accuracy even more than confidence scores do. The study by Löfström et al. [73] investigates the impact of calibration on explanation quality, using explainers such as LIME, whose explanations can otherwise be misleading. The study indicates that adding a calibration layer significantly enhances the accuracy and fidelity of explanations.…”
Section: Discussion
confidence: 99%
“…The implementation of both the regression and the probabilistic regression solutions is expanding the calibrated-explanations Python package [23] and relies on the ConformalPredictiveSystem from the crepes package [37]. By default, ConformalPredictiveSystem is used without normalization but DifficultyEstimator provided by crepes.extras is fully supported by calibrated-explanations, with normalization options corresponding to the list given at the end of Section 3.1.…”
Section: Methods
confidence: 99%
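The quoted passage builds on conformal predictive systems, as provided by the crepes package. As a minimal sketch of the underlying split-conformal idea (illustrative function names, not the crepes API): calibration residuals are stored, and a test prediction's uncertainty interval is read off their empirical quantiles.

```python
# Minimal sketch of a split-conformal predictive system (CPS), the idea
# behind crepes' ConformalPredictiveSystem; names here are illustrative.

def fit_cps(y_cal, y_hat_cal):
    """Store the sorted calibration residuals (y - y_hat)."""
    return sorted(y - yh for y, yh in zip(y_cal, y_hat_cal))

def cps_interval(residuals, y_hat, lower_pct=5, upper_pct=95):
    """Percentile interval from the conformal predictive distribution:
    the point prediction shifted by empirical residual quantiles."""
    n = len(residuals)
    lo = residuals[max(0, int(n * lower_pct / 100) - 1)]
    hi = residuals[min(n - 1, int(n * upper_pct / 100))]
    return y_hat + lo, y_hat + hi

# Toy usage: a model with small symmetric errors on calibration data
y_cal     = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
y_hat_cal = [1.1, 1.9, 3.2, 3.8, 5.1, 5.9, 7.2, 7.8, 9.1, 9.9]
res = fit_cps(y_cal, y_hat_cal)
low, high = cps_interval(res, y_hat=5.0)
print(low < 5.0 < high)  # True: the interval brackets the prediction
```

Normalization, as mentioned in the quote, would additionally scale each residual by a per-instance difficulty estimate (crepes' `DifficultyEstimator`) so that harder instances get wider intervals.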
“…Below is an introduction to CEC [23], which provides the foundation for this paper's contribution. In the following descriptions, a factual explanation is composed of a calibrated prediction from the underlying model, accompanied by an uncertainty interval, and a collection of factual feature rules, each composed of a feature weight with an uncertainty interval and a factual condition covering that feature's instance value.…”
Section: Calibrated Explanations for Classification (CEC)
confidence: 99%
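The structure described in the quote above can be sketched as a small data model; every field name here is an illustrative assumption, not the calibrated-explanations API:

```python
from dataclasses import dataclass, field

# Sketch of a CEC factual explanation as described in the quoted passage:
# a calibrated prediction with an uncertainty interval, plus one factual
# rule per feature (weight + weight interval + condition).

@dataclass
class FeatureRule:
    condition: str           # factual condition covering the instance value
    weight: float            # the feature's contribution to the prediction
    weight_interval: tuple   # (low, high) uncertainty interval for the weight

@dataclass
class FactualExplanation:
    prediction: float            # calibrated prediction from the model
    prediction_interval: tuple   # (low, high) uncertainty interval
    rules: list = field(default_factory=list)  # one FeatureRule per feature

exp = FactualExplanation(
    prediction=0.82,
    prediction_interval=(0.74, 0.90),
    rules=[FeatureRule("age > 40", 0.15, (0.10, 0.21))],
)
print(len(exp.rules))  # 1
```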

Calibrated Explanations for Regression

Löfström, Löfström, Johansson et al. 2023 (preprint; self-citation)