Proceedings of the Web Conference 2020 2020
DOI: 10.1145/3366423.3380087
Learning Model-Agnostic Counterfactual Explanations for Tabular Data

Cited by 131 publications (136 citation statements); references 10 publications.
“…The explanation generation is considered a two-fold optimization problem of finding pertinent positives and negatives. Pawelczyk et al. use an autoencoder architecture for a pretrained classifier, performing counterfactual search in a nearest-neighbor style [151]. Model-agnostic frameworks are largely found to use decision trees as part of the reasoning mechanism instead of explaining their output.…”
Section: Explainability Methods
confidence: 99%
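The autoencoder-based, nearest-neighbor-style counterfactual search described above can be sketched as follows. This is a minimal illustration under loud assumptions, not the method from [151]: the encoder/decoder pair is a toy invertible linear map, the classifier is a toy linear rule, and the search simply samples candidates in growing latent neighborhoods until the decoded point flips the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the cited models): a linear "encoder"/
# "decoder" pair and a linear classifier over 2-D tabular inputs.
W = np.array([[1.0, 0.2], [0.1, 1.0]])   # encoder weight
W_inv = np.linalg.inv(W)                  # decoder = inverse map

def encode(x):
    return x @ W

def decode(z):
    return z @ W_inv

def classify(x):
    # positive class iff x1 + x2 > 1
    return int(x.sum() > 1.0)

def latent_counterfactual(x, step=0.05, max_iter=500, samples=32):
    """Search growing neighborhoods of z = encode(x) in latent space
    until the decoded candidate flips the classifier's decision."""
    target = 1 - classify(x)
    z = encode(x)
    for r in np.arange(step, step * max_iter, step):
        for _ in range(samples):
            d = rng.normal(size=z.shape)
            cand = decode(z + r * d / np.linalg.norm(d))
            if classify(cand) == target:
                return cand
    return None

x = np.array([0.2, 0.3])          # classified 0 (sum <= 1)
cf = latent_counterfactual(x)
print(classify(x), classify(cf))  # 0 1
```

Because candidates are decoded from the autoencoder's latent space rather than perturbed in input space, the returned counterfactual tends to stay near the data manifold, which is the motivation for this style of search.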
“…Labaien et al. calculate the number of changes needed to switch from the original to the selected contrastive sample while respecting the dataset's constraints [141]. To estimate the faithfulness of the generated counterfactuals, Pawelczyk et al. suggest calculating the so-called degree of difficulty of a counterfactual suggestion, which measures how costly it is to achieve the state of the given suggestion [151]. Aiming to provide realistic counterfactuals, Sharma et al. introduce a robustness-based counterfactual explanation score, defined as the expected distance between the input instances and their corresponding counterfactuals [154].…”
Section: Evaluation Methods
confidence: 99%
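The expected-distance style of score mentioned above (attributed to Sharma et al. [154]) can be sketched as a mean p-norm between each input row and its counterfactual. The function name and the choice of the L1 norm as a default are assumptions for illustration, not the exact definition in [154].

```python
import numpy as np

def expected_cf_distance(X, X_cf, p=1):
    """Mean p-norm distance between each input row and its counterfactual.
    Lower values mean the counterfactuals stay closer to the originals."""
    X, X_cf = np.asarray(X, float), np.asarray(X_cf, float)
    return float(np.mean(np.linalg.norm(X - X_cf, ord=p, axis=1)))

X    = [[0.0, 0.0], [1.0, 1.0]]
X_cf = [[0.5, 0.0], [1.0, 2.0]]
print(expected_cf_distance(X, X_cf))  # (0.5 + 1.0) / 2 = 0.75
```

Averaging over the whole evaluation set is what makes this an *expected* distance: a single distant counterfactual can be diagnosed separately from a method that produces uniformly costly suggestions.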
“…The ODE layer in the generator transforms the concatenation of a noise vector and a condition vector into another latent vector that is fed into the generator (see Section 3.3). For these reasons, many web-oriented researchers focus on various tasks on tabular data [10,12,27,30,32,45,56,59,62,63]. In this work, generating realistic synthetic tabular data is of our utmost interest.…”
Section: (C)
confidence: 99%
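The input construction described in that excerpt — concatenating a noise vector with a condition vector before a latent transform — can be sketched minimally. This is an assumption-laden stand-in: the fixed linear map below takes the place of the cited ODE layer, and all dimensions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the linear map M stands in for the ODE layer
# that maps the concatenated (noise, condition) vector to a new latent.
noise_dim, cond_dim, latent_dim = 4, 2, 3
M = rng.normal(size=(noise_dim + cond_dim, latent_dim))

def to_latent(noise, cond):
    """Concatenate noise and condition, then project to the latent space
    consumed by the generator."""
    return np.concatenate([noise, cond]) @ M

z_prime = to_latent(rng.normal(size=noise_dim), np.array([1.0, 0.0]))
print(z_prime.shape)  # (3,)
```

Conditioning the generator this way lets one noise distribution serve all condition values, which is the usual motivation for concatenation-based conditional generators.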