2022
DOI: 10.1609/aaai.v36i5.20507
Sufficient Reasons for Classifier Decisions in the Presence of Domain Constraints

Abstract: Recent work has unveiled a theory for reasoning about the decisions made by binary classifiers: a classifier describes a Boolean function, and the reasons behind an instance being classified as positive are the prime-implicants of the function that are satisfied by the instance. One drawback of these works is that they do not explicitly treat scenarios where the underlying data is known to be constrained, e.g., certain combinations of features may not exist, may not be observable, or may be required to be disr…

Cited by 19 publications (29 citation statements)
References 11 publications
“…One additional solution for computing an AXp is to formulate the problem as finding a minimal correction subset (MCS) of a propositional Horn formula, and then exploiting existing efficient algorithms (Arif et al, 2015; Marques-Silva et al, 2016). Besides enabling efficient implementations, the Horn encoding allows for integrating constraints that restrict the feature space by disallowing points in feature space that violate those constraints (Gorji and Rubin, 2021). As long as the added constraints are also Horn, as is the case with propositional rules, the complexity of reasoning is unaffected.…”
Section: Abductive Path Explanations by Propositional Horn Encoding
confidence: 99%
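The Horn-encoding approach in the statement above can be sketched in a few lines. The Python below is an illustrative sketch, not the implementation from the cited works: it checks Horn satisfiability by plain unit propagation (rather than a near-linear watched-literal procedure) and extracts one AXp by deletion; the toy classifier encoding and all names are assumptions for illustration.

```python
# Hypothetical sketch: one AXp over a propositional Horn encoding.
# Entailment is decided by unit propagation; the AXp is shrunk greedily.

def horn_sat(clauses, facts):
    """Horn satisfiability by unit propagation.

    clauses: list of (body, head) pairs; head is None for a negative
    (goal) clause, i.e. body -> False.  facts: iterable of unit literals.
    Returns True iff the clauses plus the facts are satisfiable."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if set(body) <= derived:
                if head is None:
                    return False       # a negative clause fires: unsatisfiable
                if head not in derived:
                    derived.add(head)
                    changed = True
    return True

def one_axp(clauses, feature_lits):
    """Deletion-based extraction of one subset-minimal set of feature
    literals whose fixing keeps the encoding unsatisfiable (an AXp)."""
    axp = list(feature_lits)
    for lit in list(axp):
        trial = [l for l in axp if l != lit]
        if not horn_sat(clauses, trial):   # still UNSAT without lit:
            axp = trial                    # lit is redundant, drop it
    return axp

# Toy encoding: the classifier predicts 'pos' iff x1 and x2 hold; the
# negative clause pos -> False asserts the prediction would have to flip.
clauses = [(["x1", "x2"], "pos"), (["pos"], None)]
print(one_axp(clauses, ["x1", "x2", "x3"]))   # -> ['x1', 'x2']; x3 is irrelevant
```

Because any domain constraint that is itself Horn simply extends `clauses`, the same propagation loop handles a constrained feature space without changing the complexity of each check, which is the point made in the quoted passage.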
“…9. A sample of references on formal explainability includes (Shih et al, 2018; Ignatiev et al, 2019a; Shih et al, 2019; Ignatiev et al, 2019b; Narodytska et al, 2019; Wolf et al, 2019; Audemard et al, 2020; Darwiche, 2020; Darwiche and Hirth, 2020; Shi et al, 2020; Rago et al, 2020; Boumazouza et al, 2020; Ignatiev et al, 2020b; Izza et al, 2020; Marques-Silva et al, 2021; Malfa et al, 2021; Huang et al, 2021b; Audemard et al, 2021; Asher et al, 2021; Cooper and Marques-Silva, 2021; Boumazouza et al, 2021; Huang et al, 2021a; Rago et al, 2021; Liu and Lorini, 2021; Wäldchen et al, 2021; Darwiche and Marquis, 2021; Blanc et al, 2021; Arenas et al, 2021; Huang et al, 2022; Ignatiev et al, 2022; Marques-Silva and Ignatiev, 2022; Gorji and Rubin, 2022). 10.…”
unclassified
“…(B) Mapping of features. (Boumazouza et al, 2020, 2021; Darwiche, 2020; Darwiche and Hirth, 2020, 2022; Izza et al, 2020, 2021, 2022a; Rago et al, 2020, 2021; Shi et al, 2020; Amgoud, 2021; Arenas et al, 2021; Asher et al, 2021; Blanc et al, 2021, 2022a; Cooper and Marques-Silva, 2021; Darwiche and Marquis, 2021; Huang et al, 2021a,b, 2022; Ignatiev and Marques-Silva, 2021; Marques-Silva, 2021, 2022; Lorini, 2021, 2022a; Malfa et al, 2021; Wäldchen et al, 2021; Amgoud and Ben-Naim, 2022; Ferreira et al, 2022; Gorji and Rubin, 2022; Marques-Silva and Ignatiev, 2022; Wäldchen, 2022; Yu et al, 2022), and are characterized by formally provable guarantees of rigor, given the underlying ML models. Given such guarantees of rigor, logic-based explainability should be contrasted with well-known model-agnostic approaches to XAI (Ribeiro et al, 2016, 2018; Lundberg and Lee, 2017; Guidotti et al, 2019), which offer no guarantees of rigor.…”
Section: Logic Foundations
confidence: 99%
“…Given the above, as long as the allowed points in feature space are represented by a constraint set, we can take those constraints into account when computing AXp's and CXp's. The original ideas on accounting for input constraints were presented in recent work [130], and were more recently extended to contrastive explanations [323]. However, a major difficulty with handling input constraints is how to infer those constraints in the first place.…”
Section: Input Constraints and Distributions
confidence: 99%
“…The size of formal explanations has been addressed by considering probabilistic explanations [18,169,170,312,313]. The effect on explainability of input constraints, which restrict the points in feature space to consider, has been studied in recent works [130,323]. The use of surrogate models for computing formal explanations of complex models has been proposed [57].…”
Section: Introduction
confidence: 99%