2023
DOI: 10.1007/s11023-023-09637-x

Explainable AI and Causal Understanding: Counterfactual Approaches Considered

Abstract: The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the app…

Cited by 16 publications (4 citation statements)
References 42 publications
“…She worries that her application was denied because of her race or her gender. Sam Baron (2023) recently proposed one way that the bank can provide an explanation for its AI's decisions. The bank can make a list of counterfactuals of the form "If Sally's income had been $50,000, then the AI would have accepted her application.…”
Section: Discussion
Citation type: mentioning
Confidence: 99%
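The counterfactual statements quoted in this passage suggest a simple way to generate such explanations: perturb one input feature at a time and report the changes that would have flipped the model's decision. The sketch below is a minimal illustration of that idea; the `ToyLoanModel`, feature names, and search grids are hypothetical stand-ins, not the bank model discussed by Baron (2023) or the citing authors.

```python
# Minimal sketch of single-feature counterfactual search.
import numpy as np

class ToyLoanModel:
    """Illustrative stand-in for an opaque loan-approval model:
    approve (1) when income minus 80% of debt exceeds 45,000."""
    def predict(self, X):
        X = np.atleast_2d(X)
        return (X[:, 0] - 0.8 * X[:, 1] > 45_000).astype(int)

def single_feature_counterfactuals(model, x, names, grids):
    """Enumerate one-feature edits to x that flip the model's decision,
    reported as counterfactual statements of the quoted form."""
    base = model.predict(x)[0]
    statements = []
    for i, name in enumerate(names):
        for value in grids[name]:
            x_cf = x.copy()
            x_cf[i] = value
            if model.predict(x_cf)[0] != base:
                statements.append(
                    f"If {name} had been {value:,.0f}, "
                    f"the model would have decided differently."
                )
    return statements

model = ToyLoanModel()
sally = np.array([30_000.0, 20_000.0])   # income, debt: a denied applicant
grids = {"income": np.linspace(30_000, 120_000, 10),
         "debt": np.linspace(0, 20_000, 5)}
for s in single_feature_counterfactuals(model, sally, ["income", "debt"], grids):
    print(s)
```

Run on the denied applicant, this prints statements of exactly the quoted form, e.g. "If income had been 70,000, the model would have decided differently." More sophisticated counterfactual methods optimise for the nearest such change rather than enumerating a fixed grid.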
“…We would like to know about at least some processes, mechanisms, constraints, or structural dependencies inside of the model, rather than merely probing the ML-model-as-black-box from the outside and post hoc. While XAI methods can yield information that seems plausible and physically meaningful, this information may be irrelevant with respect to how the model actually arrived at a given decision or prediction (Rudin 2019; Baron 2023). This, in turn, can undermine our trust in the model for future applications.…”
Section: Post-hoc XAI in Climate Science and Statistical Understanding
Citation type: mentioning
Confidence: 99%
“…Moreover, there are many systems that demonstrate emergent behaviour where parthood is appealed to for at least a partial explanation of that emergence. For example, it is used in the explanation of emergent characteristics of social behaviour (Abdou & Gilbert, 2009; Epstein, 2002; Schelling, 1971); economic behaviour (Chen & Yeh, 2002; Dosi & Roventini, 2019); behaviours of biological systems (Vicsek et al., 1995; Winfree, 1967); and artificial neural networks, such as machine learning algorithms like Deep Learning (Baron, 2023; Gupta & Jayannavar, 2021).…”
Section: Motivations for Mereological Models of Spacetime Emergence
Citation type: mentioning
Confidence: 99%