2022
DOI: 10.48550/arxiv.2202.04092
Preprint

Machine Explanations and Human Understanding

Abstract: Explanations are hypothesized to improve human understanding of machine learning models and achieve a variety of desirable outcomes, ranging from model debugging to enhancing human decision making. However, empirical studies have found mixed and even negative results. An open question, therefore, is under what conditions explanations can improve human understanding and in what way. Using adapted causal diagrams, we provide a formal characterization of the interplay between machine explanations and human understanding…

Cited by 5 publications (2 citation statements)
References 33 publications
“…This provides a possible path for better signaling ground-truth labels to help decision-makers avoid over-reliance and make better decisions. This path resonates with recent theoretical work by Chen et al. [19], which suggests that feature-based explanations can only reveal model decision boundaries (how the model makes decisions), and that it is by their contrast with human intuitions about the task boundaries (which features should contribute to the outcome) that one can detect model errors. We may in fact view the grayed-out words as such contrasts.…”
Section: Discussion (supporting)
confidence: 84%
“…Indeed, it is essential to provide accurate and understandable explanations as poor explanations can sometimes be even worse than no explanation at all [80] and may also generate undesired bias in the users [81,82]. As a consequence, properly structuring [83] and evaluating the interpretability and effectiveness of explanations requires a deep understanding of the ways in which humans interpret and understand them, while also accounting for the relationship between human understanding and model explanations [84,85]. For such reasons, the explainable AI research field spreads from IT-related fields, such as computer science and machine learning, to a variety of human-centred disciplines, such as psychology, philosophy, and decision making [86].…”
Section: Understanding the Human's Perspective in Explainable AI (mentioning)
confidence: 99%