A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data

Raphael Mazzine, David Martens

Preprint, 2021. DOI: 10.48550/arXiv.2107.04680

Abstract: Counterfactual explanations are viewed as an effective way to explain machine learning predictions. This interest is reflected by a relatively young literature that already contains dozens of algorithms for generating such explanations. These algorithms focus on finding how features can be modified to change the output classification. However, this rather general objective can be achieved in different ways, which brings about the need for a methodology to test and benchmark these algorithms. The contributions…
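The general objective stated in the abstract, modifying an instance's feature values until the classifier's predicted class flips while keeping the change small, can be illustrated with a short sketch. The random-search routine below is an illustration only, not one of the algorithms benchmarked in the paper; the function name, trial count, and L1 cost are assumptions chosen for the example, and a scikit-learn classifier on synthetic data stands in for the benchmark setup.

```python
# Illustrative sketch only: a naive random-search counterfactual generator.
# Not one of the 10 benchmarked methods; the parameters and the L1 cost
# are assumptions chosen for this example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def random_search_counterfactual(model, x, n_trials=5000, scale=0.5, seed=0):
    """Perturb x at random and keep the prediction-flipping candidate
    that stays closest to x in L1 distance; return None if no flip is found."""
    rng = np.random.default_rng(seed)
    original_class = model.predict(x.reshape(1, -1))[0]
    best, best_dist = None, np.inf
    for _ in range(n_trials):
        candidate = x + rng.normal(0.0, scale, size=x.shape)
        if model.predict(candidate.reshape(1, -1))[0] != original_class:
            dist = np.abs(candidate - x).sum()  # prefer small overall change
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best

# Synthetic tabular data stands in for the paper's 22 benchmark datasets.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
cf = random_search_counterfactual(model, X[0])
if cf is not None:
    print("feature changes needed to flip the prediction:", np.round(cf - X[0], 3))
```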

Cited by 2 publications (1 citation statement) | References 42 publications
“…Also, methods up to that point in time are often missing user studies and comparative tests. A closer look at how to tackle comparative studies was taken by Mazzine and Martens (2021) in their extensive survey. They benchmarked open source implementations of 10 strategies for counterfactual generation for DNNs on 22 different tabular datasets.…”
Section: Counterfactual and Contrastive Explanations (citation type: mentioning, confidence: 99%)