2020
DOI: 10.1007/978-3-030-58112-1_31

Multi-Objective Counterfactual Explanations

Abstract: Counterfactual explanations are one of the most popular methods to make predictions of black box machine learning models interpretable by providing explanations in the form of 'what-if scenarios'. Most current approaches optimize a collapsed, weighted sum of multiple objectives, which are naturally difficult to balance a priori. We propose the Multi-Objective Counterfactuals (MOC) method, which translates the counterfactual search into a multi-objective optimization problem. Our approach not only returns a diverse set of counterfactuals with different trade-offs between the proposed objectives, but also maintains diversity in feature space.
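To make the contrast in the abstract concrete: a weighted-sum approach collapses all objectives into one scalar loss whose weights must be fixed before the search, while MOC optimizes the objectives jointly and returns the Pareto set of non-dominated counterfactuals. A minimal formal sketch, with generic objective symbols o_i assumed here rather than quoted from the paper:

```latex
% Collapsed weighted sum: the weights \lambda_i must be balanced a priori.
\min_{x'} \; \sum_{i=1}^{k} \lambda_i \, o_i(x', x)

% Multi-objective reformulation (MOC): no weights; the search returns
% the Pareto set of non-dominated trade-offs between the objectives.
\min_{x'} \; \bigl( o_1(x', x), \ldots, o_k(x', x) \bigr)
```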

Cited by 170 publications (138 citation statements)
References 24 publications

“…Others (e.g., one-hot encoding) mitigate the computational problem at the expense of neglecting the complex relationships between and within categorical variables [10]. Recently, researchers have proposed to utilize genetic algorithms to generate counterfactual explanations [23,24]. While capable of generating foils for mixed data, these approaches neither capture the full complexity of categorical variables nor effectively address desired characteristics of explanations.…”
Section: Methods for the Generation of Coherent Counterfactual Explanations (mentioning)
confidence: 99%
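The genetic-algorithm approaches this excerpt refers to need variation operators that respect mixed feature types. A minimal illustrative mutation operator for mixed numeric/categorical candidates; the schema, names, and mutation scheme below are assumptions for illustration, not taken from [23,24]:

```python
import random

# Assumed example schema: numeric features carry (min, max) ranges,
# categorical features carry their admissible levels.
NUMERIC_RANGES = {"age": (18, 90), "income": (0.0, 200_000.0)}
CATEGORICAL_LEVELS = {"job": ["clerk", "manager", "self-employed"]}

def mutate(candidate, p=0.2):
    """Mutate a dict-encoded candidate: Gaussian jitter (clamped to the
    feature range) for numeric features, random level swap for categorical."""
    child = dict(candidate)
    for name, (lo, hi) in NUMERIC_RANGES.items():
        if random.random() < p:
            jitter = random.gauss(0, 0.1 * (hi - lo))
            child[name] = min(hi, max(lo, child[name] + jitter))
    for name, levels in CATEGORICAL_LEVELS.items():
        if random.random() < p:
            child[name] = random.choice([l for l in levels if l != child[name]])
    return child

parent = {"age": 42, "income": 55_000.0, "job": "clerk"}
print(mutate(parent))
```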
“…As we have seen, the seminal work on CF explanation [Wachter et al., 2018; Mittelstadt et al., 2019] proposes perturbing the features of synthetic CF instances, under a loss function balancing proximity to the test instance against proximity to the decision boundary for the CF class, using a scaled L1-norm distance metric. This idea has inspired follow-on work using different distance metrics (e.g., L2-norm) or, indeed, combinations of distance metrics [Dandl et al., 2020; Artelt and Hammer, 2020], with added constraints to deliver diverse CFs [Mothilal et al., 2020]. Hence, later, we will argue for the use of selected distance metrics to benchmark evaluations (ideally, ones that are psychologically grounded).…”
Section: Counterfactual Insights (mentioning)
confidence: 99%
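For reference, the loss the excerpt describes is usually written as follows (reconstructed here as Wachter et al., 2018 is commonly cited, with the MAD-scaled L1 distance as the metric):

```latex
\arg\min_{x'} \max_{\lambda} \; \lambda \bigl( \hat{f}(x') - y' \bigr)^2 + d(x, x'),
\qquad
d(x, x') = \sum_{k} \frac{\lvert x_k - x'_k \rvert}{\mathrm{MAD}_k}
```

The first term pulls the synthetic instance x' toward the desired outcome y' (proximity to the decision boundary for the CF class); the second keeps it close to the test instance x, which is the balance the excerpt refers to.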
“…Laugel et al. [2019] showed that the rate of one type of "bad" CF (i.e., out-of-distribution items) can be as high as 36% for some CF methods, and Delaney et al. [2020] have shown that even close, low-sparsity CFs can be out-of-distribution (see Figure 2). In the 100 systems reviewed here, we found that only 22% report "coverage results", though the definitions of the concept differ [Keane and Smyth, 2020; Schleich et al., 2021; Dandl et al., 2020].…”
Section: Deficit #4: Covering Coverage (mentioning)
confidence: 99%
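One common way to flag the out-of-distribution counterfactuals this excerpt discusses is to compare a counterfactual's distance to its nearest training neighbors against what is typical within the training data itself. The concrete rule below is an illustrative assumption, not a metric from the cited papers:

```python
import numpy as np

def is_out_of_distribution(x_cf, X_train, k=5, quantile=0.95):
    """Flag a counterfactual as out-of-distribution when its mean distance
    to its k nearest training points exceeds what is typical for the
    training points themselves (computed leave-one-out)."""
    def knn_stat(x, X):
        d = np.linalg.norm(X - x, axis=1)
        return np.sort(d)[:k].mean()

    ref = np.array([knn_stat(x, np.delete(X_train, i, axis=0))
                    for i, x in enumerate(X_train)])
    return knn_stat(x_cf, X_train) > np.quantile(ref, quantile)
```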
“…data distribution and the resulting counterfactual are connected via high-density paths to the explained instance. Dandl et al. [82] proposed the Multi-Objective Counterfactuals (MOC) method, which translates the counterfactual search into a multi-objective optimization problem. The proposed approach returns a diverse set of counterfactuals with different trade-offs between the proposed objectives and maintains diversity in the feature space.…”
Section: Model-agnostic Counterfactual Explanations (mentioning)
confidence: 99%
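A self-contained sketch of the core idea in this last excerpt: score each counterfactual candidate on several objectives, then keep only the non-dominated (Pareto-optimal) ones, which is where the diverse trade-off set comes from. The four objective stand-ins below (prediction distance, proximity, sparsity, plausibility) follow the usual description of MOC, but their exact definitions in the paper differ (e.g., it uses Gower distance), and MOC evolves candidates with a modified NSGA-II rather than sampling them at random; everything here is an illustrative assumption:

```python
import numpy as np

def objectives(x_cf, x_orig, predict, target, X_train):
    """Score one candidate on four objectives (all minimized); these are
    simplified stand-ins assumed for illustration, not MOC's definitions."""
    o1 = abs(predict(x_cf) - target)                # distance to desired prediction
    o2 = np.abs(x_cf - x_orig).mean()               # proximity to the explained instance
    o3 = np.sum(~np.isclose(x_cf, x_orig))          # sparsity: number of changed features
    o4 = np.abs(X_train - x_cf).mean(axis=1).min()  # plausibility: distance to nearest training point
    return np.array([o1, o2, o3, o4])

def pareto_front(F):
    """Return indices of non-dominated rows of the objective matrix F."""
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 4))
    predict = lambda x: 1 / (1 + np.exp(-x.sum()))  # toy black-box model
    x_orig = np.zeros(4)
    # Random candidate pool stands in for the evolved population.
    candidates = x_orig + rng.normal(scale=0.5, size=(300, 4))
    F = np.array([objectives(c, x_orig, predict, 0.9, X_train) for c in candidates])
    front = pareto_front(F)
    print(f"{len(front)} non-dominated counterfactuals out of {len(candidates)}")
```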