2022
DOI: 10.1007/s11023-022-09598-7
Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice

Abstract: Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. In this article, an expanded version of a paper originally presented at the 37th Conference on Uncertainty in Artificial Intelligence (Watson et al., 2021), we attempt to fill this gap. Buildin…
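
To make the abstract's central notions concrete, the snippet below is a minimal, illustrative sketch of how the sufficiency and necessity of a feature subset could be estimated for a black-box classifier by Monte Carlo sampling. The function names (prob_sufficiency, prob_necessity) and the interventional reading of the two quantities are assumptions for illustration only, not the paper's exact definitions or algorithm.

```python
import numpy as np

def prob_sufficiency(model, x, background, subset, target, n_samples=1000, seed=0):
    """Estimate P(model(x') == target) when the features in `subset` are fixed
    to their values in x and the remaining features are drawn from a background sample."""
    rng = np.random.default_rng(seed)
    x_prime = background[rng.integers(0, len(background), n_samples)].copy()
    x_prime[:, subset] = x[subset]                 # hold the explained subset fixed
    return float(np.mean(model(x_prime) == target))

def prob_necessity(model, x, background, subset, target, n_samples=1000, seed=0):
    """Estimate P(model(x') != target) when only the features in `subset` are
    resampled from the background and all other features keep their original values."""
    rng = np.random.default_rng(seed)
    x_prime = np.tile(x, (n_samples, 1)).astype(float)
    donors = background[rng.integers(0, len(background), n_samples)]
    x_prime[:, subset] = donors[:, subset]         # intervene only on the subset
    return float(np.mean(model(x_prime) != target))
```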

Cited by 21 publications (13 citation statements)
References 64 publications
“…As observed by Watson, Gultchin et al. [57], necessity and sufficiency are the building blocks of any successful explanation. Necessary and sufficient explanations are complementary to each other: given a source entity, a relation and the predicted target entity:…”
Section: Explaining Link Predictions (mentioning)
confidence: 87%
“…Recently, LIME seems to have been outclassed by frameworks that formulate the relevance of input features in terms of Shapley values [46]. As observed by Watson, Gultchin et al. [57], these methods, such as SHAP [30], have gained popularity due to their solid theoretical backing derived from Game Theory. We provide a detailed comparison between SHAP and our approach in Section 4.3.…”
Section: General Purpose Framework (mentioning)
confidence: 99%
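
For readers unfamiliar with the Shapley-value frameworks discussed in this citation, here is a minimal usage sketch with the open-source `shap` package; the dataset and model are illustrative choices, not taken from the cited works.

```python
# Minimal SHAP sketch: Shapley-value attributions for a tree ensemble.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer exploits the tree structure to compute Shapley values efficiently.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X.iloc[:100])  # one row of attributions per instance

# Local accuracy: explainer.expected_value plus the sum of a row's attributions
# recovers that instance's raw (log-odds) model output.
print(attributions.shape)
```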
“…SHAP is preferred over LIME because: (1) SHAP has been well formulated for tree-based interpretations, (2) improvements in mathematical guarantees over LIME have been well documented, (3) SHAP provides global feature importances, and (4) it reports salient interactions. However, this does not necessarily indicate SHAP as the “go-to” feature importance technique; discussion of display outputs from other feature importance approaches (e.g., counterfactual explanations; DiCE, Anchor) in collaboration with the clinical domain expert is important when making a final selection (58–62). While the ultimate utility in using SHAP lies in the ability to fit explanatory models for each individual in the case that machine learning approaches dominate, SHAP, in any model application, can generate instance-wise importance values for useful, patient-specific readouts for the clinician.…”
Section: Discussion (mentioning)
confidence: 99%
“…Galhotra et al (2021) suggested an approach to capture the notions of necessity and sufficiency from probabilistic causal models (Pearl, 2009). Watson et al (2021) presented a different method for quantifying necessity and sufficiency over subsets of features. We follow the framework of probabilistic causal models, and adopt the definitions from Galhotra et al (2021).…”
Section: Background and Related Work (mentioning)
confidence: 99%
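
As background for the probability-of-causation framing referenced in this citation, Pearl's (2009) counterfactual definitions of the probability of necessity (PN) and probability of sufficiency (PS) for a binary cause X and binary outcome Y are sketched below; how Galhotra et al. (2021) and Watson et al. (2021) adapt these notions to model predictions differs and is detailed in the respective papers.

```latex
% Probability of necessity: given that X = 1 and Y = 1 occurred,
% how likely is it that Y would have been 0 had X been 0?
\mathrm{PN} = P\bigl(Y_{X=0} = 0 \mid X = 1,\; Y = 1\bigr)

% Probability of sufficiency: given that X = 0 and Y = 0 occurred,
% how likely is it that Y would have been 1 had X been 1?
\mathrm{PS} = P\bigl(Y_{X=1} = 1 \mid X = 0,\; Y = 0\bigr)
```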