2022
DOI: 10.48550/arxiv.2202.01875
Preprint

Rethinking Explainability as a Dialogue: A Practitioner's Perspective

Abstract: As practitioners increasingly deploy machine learning models in critical domains such as healthcare, finance, and policy, it becomes vital to ensure that domain experts function effectively alongside these models. Explainability is one way to bridge the gap between human decision-makers and machine learning models. However, most of the existing work on explainability focuses on one-off, static explanations like feature importances or rule-lists. These sorts of explanations may not be sufficient for many use ca…

Cited by 16 publications (29 citation statements)
References 61 publications

“…Note that throughout the LRP values are scaled by the maximum absolute LRP value for any variable across the ensemble. If the LRP value consistently has the same sign across the quantiles, then we can be confident of the effect this feature has on the output; the piece of information of most interest to practitioners in a recent survey in Lakkaraju et al (2022).…”
Section: Layer-wise Relevance Propagation (LRP) (mentioning)
confidence: 96%
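The scaling and sign-consistency check described in the quoted statement can be illustrated with a short sketch. The snippet below is not taken from the citing paper's code; the array layout, quantile choices, and function name are assumptions made for illustration only.

```python
import numpy as np

def normalized_lrp_sign_consistency(lrp, quantiles=(0.05, 0.5, 0.95)):
    """Illustrative sketch (assumed data layout, not the cited paper's code).

    lrp: array of shape (n_ensemble_members, n_features) holding raw LRP values.
    Returns the scaled LRP values and a per-feature flag that is True when the
    feature's effect has a consistent sign across the chosen ensemble quantiles.
    """
    # Scale by the maximum absolute LRP value for any variable across the ensemble.
    lrp_scaled = lrp / np.max(np.abs(lrp))
    # Per-feature LRP values at the chosen quantiles of the ensemble distribution.
    q_vals = np.quantile(lrp_scaled, quantiles, axis=0)  # shape: (n_quantiles, n_features)
    # The effect is trusted only if its sign agrees at every quantile.
    consistent = np.all(q_vals > 0, axis=0) | np.all(q_vals < 0, axis=0)
    return lrp_scaled, consistent
```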
“…These XAI methods reveal the extent to which the BNN is fit for purpose for our problem. Moreover, our approach means we can gain a reliable notion of the confidence of the explanation, which has been highlighted as a key area where XAI techniques must improve (Lakkaraju et al, 2022). Applying our XAI techniques to BNNs trained on real-world ocean circulation data in an application designed to understand future climate has the added benefit that we are able to validate and confirm these novel applications of XAI using physical understanding of ocean circulation processes, improving confidence in our BNN predictions.…”
mentioning
confidence: 96%
“…Our proposed debate game uses models' latent knowledge to extract arguments and counterarguments, preserving the arguments' faithfulness to the given model. (Lakkaraju et al 2022) advocates the importance of thinking of explainability as a dialog rather than a fixed attention map or feature attribution. We provide a practical framework in this direction.…”
Section: Related Work (mentioning)
confidence: 99%
“…All the methods described so far fall under the category of post-hoc explanations, where the trained model is diagnosed to extract reasons for making a particular decision. Instead, methods such as (Stammer, Schramowski, and Kersting 2021; Lampinen et al 2021; Irving, Christiano, and Amodei 2018) aim to develop an intrinsically transparent and aligned model. We borrow some ideas from (Irving, Christiano, and Amodei 2018) in this work to develop our post-hoc explainability framework.…”
Section: Introduction (mentioning)
confidence: 99%
“…What's the next big thing in XAI? According to Lakkaraju et al [277], dialogue could be the most important part of explanation. Create algorithms that not only map out feature importance, but make sure they are capable of providing feedback in natural language.…”
Section: Discussion (mentioning)
confidence: 99%