2021
DOI: 10.48550/arxiv.2106.00546
Preprint

Efficient Explanations With Relevant Sets

Yacine Izza,
Alexey Ignatiev,
Nina Narodytska
et al.

Abstract: Recent work proposed δ-relevant inputs (or sets) as a probabilistic explanation for the predictions made by a classifier on a given input. δ-relevant sets are significant because they serve to relate (model-agnostic) Anchors with (model-accurate) PI-explanations, among other explanation approaches. Unfortunately, the computation of smallest size δ-relevant sets is complete for NP^PP, rendering their computation largely infeasible in practice. This paper investigates solutions for tackling the practical limita…
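The abstract's central notion can be illustrated with a small brute-force check (a sketch only, not the paper's algorithm; the classifier, instance, and threshold below are toy assumptions): a set S of features is δ-relevant for an instance v if, fixing v's values on S and drawing the remaining features uniformly at random, the classifier returns f(v) with probability at least δ.

```python
from itertools import product

def is_delta_relevant(f, v, S, delta):
    """Check whether feature set S is delta-relevant for classifier f at
    instance v: among all boolean inputs agreeing with v on S (the free
    features enumerated exhaustively, i.e. uniformly), the fraction
    predicted f(v) must be at least delta."""
    n = len(v)
    free = [i for i in range(n) if i not in S]
    target = f(v)
    hits = total = 0
    for bits in product([0, 1], repeat=len(free)):
        x = list(v)
        for i, b in zip(free, bits):
            x[i] = b
        total += 1
        hits += (f(x) == target)
    return hits / total >= delta

# Toy classifier (assumption): majority vote over three boolean features.
f = lambda x: int(sum(x) >= 2)
v = (1, 1, 0)  # instance; f(v) = 1
print(is_delta_relevant(f, v, {0, 1}, 0.95))  # fixing x0 = x1 = 1 forces f = 1
print(is_delta_relevant(f, v, {0}, 0.95))     # fixing only x0: f(v) in 3 of 4 cases
```

The exhaustive enumeration makes the exponential cost explicit: deciding δ-relevance involves counting completions, which is why minimizing the set size lands in NP^PP rather than NP.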

Cited by 1 publication
(1 citation statement)
References 25 publications
“…Logic-based (or formal) explanation approaches have been studied in a growing body of research in recent years (Shih et al, 2018, 2019; Ignatiev et al, 2019a,b,c, 2020a, 2022; Narodytska et al, 2019; Wolf et al, 2019; Audemard et al, 2020, 2021, 2022a,b; Boumazouza et al, 2020, 2021; Darwiche, 2020; Darwiche and Hirth, 2020, 2022; Izza et al, 2020, 2021, 2022a,b; Marques-Silva et al, 2020, 2021; Rago et al, 2020, 2021; Shi et al, 2020; Amgoud, 2021; Arenas et al, 2021; Asher et al, 2021; Blanc et al, 2021, 2022a,b; Cooper and Marques-Silva, 2021; Darwiche and Marquis, 2021; Huang et al, 2021a,b, 2022; Ignatiev and Marques-Silva, 2021; Izza and Marques-Silva, 2021, 2022; Liu and Lorini, 2021, 2022a; Malfa et al, 2021; Wäldchen et al, 2021; Amgoud and Ben-Naim, 2022; Ferreira et al, 2022; Gorji and Rubin, 2022; Huang and Marques-Silva, 2022; Marques-Silva and Ignatiev, 2022; Wäldchen, 2022; Yu et al, 2022), and are characterized by formally provable guarantees of rigor, given the underlying ML models. Given such guarantees of rigor, logic-based explainability should be contrasted with well-known model-agnostic approaches to XAI (Ribeiro et al, 2016, 2018; Lundberg and Lee, 2017; Guidotti et al, 2019), which offer no guarantees of rigor.…”
Section: Logic-based Explainable AI
confidence: 99%