2019
DOI: 10.1109/mis.2019.2957223
Factual and Counterfactual Explanations for Black Box Decision Making

Abstract: The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of AI in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method providing faithful explanations of the decision made by a black-box classifier on a specific instance. The proposed method first learns an interpretable, local classifier on a synthetic neighborhood of the instance under inves…

Cited by 233 publications (177 citation statements)
References 10 publications
“…XAI literature proposes two main classes of methods to identify the foil. The first class locally approximates the AI system with a simpler model from which a foil is extracted [20]. For example, a decision tree is used to approximate the AI system's outputs in the vicinity of the fact to derive a foil that lies close to the fact with respect to the decision tree's structure [21].…”
Section: Methods for the Generation of Coherent Counterfactual Explanations (mentioning)
confidence: 99%
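The decision-tree approach described in the quote above can be sketched concretely. The following is a minimal illustration, not the cited method's actual code: a black box is approximated by a decision tree on a synthetic neighborhood of the fact x, and the foil is taken to be the leaf with a different predicted class that is closest to x "with respect to the tree's structure", here measured as the number of split conditions along the leaf's path that x violates. All names and the distance choice are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Illustrative setup: a random forest stands in for the opaque AI system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]  # the "fact": the instance whose decision we want to explain
# Synthetic neighborhood of x, labeled by the black box's own predictions.
Z = x + np.random.RandomState(0).normal(scale=0.5, size=(1000, X.shape[1]))
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Z, black_box.predict(Z))

t = tree.tree_
fact = tree.predict(x.reshape(1, -1))[0]

def leaf_paths(node=0, conds=()):
    """Yield (leaf_id, split conditions) for every leaf of the surrogate tree."""
    if t.children_left[node] == -1:  # -1 marks a leaf in sklearn's Tree struct
        yield node, conds
        return
    f, thr = t.feature[node], t.threshold[node]
    yield from leaf_paths(t.children_left[node], conds + ((f, thr, True),))    # x[f] <= thr
    yield from leaf_paths(t.children_right[node], conds + ((f, thr, False),))  # x[f] > thr

# Foil: the differently-labeled leaf whose path x violates the least.
best = None
for leaf, conds in leaf_paths():
    label = np.argmax(t.value[leaf])
    if label == fact:
        continue
    violated = sum((x[f] > thr) == le for f, thr, le in conds)  # conditions x fails
    if best is None or violated < best[0]:
        best = (violated, label, conds)
```

The violated conditions of the best leaf are exactly the changes to x that would flip the surrogate's (and, to the extent the approximation is faithful, the black box's) decision.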
“…xspells falls into the category of local, model-agnostic methods, which originated with [33] and were extended along diverse directions by [12] and by [14, 16]. Well-known model-agnostic local explanation methods that can also work on textual data include lime, anchor and shap.…”
Section: Related Work (mentioning)
confidence: 99%
“…Rules can also be used to provide counterfactuals, namely alternative conditions, not met by x, that would determine a different answer by the black box [4]. In our approach, we will build on lore [14], a local explainer for tabular data that learns a decision tree from a given neighborhood Z of the instance to explain. Such a tree is a surrogate model of the black box, i.e., it is trained to reproduce the decisions of the black box.…”
Section: Setting the Stage (mentioning)
confidence: 99%
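The surrogate idea quoted above is easy to sketch. This is not the lore library's API, just an assumed minimal recreation of the scheme it describes: label a neighborhood Z with the black box's own predictions (not the ground truth), fit a shallow decision tree on those labels, and check fidelity, i.e., how often the tree reproduces the black box's decisions on Z.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative black box (any opaque classifier would do here).
X, y = make_classification(n_samples=600, n_features=4, random_state=1)
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

x = X[10]  # instance to explain
rng = np.random.RandomState(1)
Z = x + rng.normal(scale=0.5, size=(800, X.shape[1]))  # neighborhood Z of x
y_bb = black_box.predict(Z)  # black-box decisions, deliberately not true labels

# Shallow surrogate: its root-to-leaf path for x is the factual rule,
# and leaves with other labels supply candidate counterfactual conditions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1).fit(Z, y_bb)
fidelity = accuracy_score(y_bb, surrogate.predict(Z))

print(f"fidelity on Z: {fidelity:.3f}")
print(export_text(surrogate))
```

A high fidelity on Z is what licenses reading the tree's rules as explanations of the black box's local behavior; a depth limit keeps the extracted rules short enough to be interpretable.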