2021 IEEE 33rd International Conference on Tools With Artificial Intelligence (ICTAI)
DOI: 10.1109/ictai52525.2021.00233
DisCERN: Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods

Abstract: Counterfactual explanations focus on "actionable knowledge" to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose, a counterfactual explainer needs to discover input dependencies that relate to outcome changes. Identifying the minimum subset of feature changes needed to action an output change in the decision is an interesting challenge for counterfactual explainers. The DisCERN algorithm introduced in this paper is a case-based counter-factua…

Cited by 9 publications (7 citation statements)
References 16 publications
“…similar cases with different class labels (see Figure 1) [7,8]. A NUN represents potential changes to the current problem, with feature attribution prioritising the changes that, when actioned, can lead to a different outcome [8,9]. Focusing on a small number of key "actionable" features is more desirable from a practical standpoint, and has the benefit of reducing the recipient's cognitive burden for understanding the counterfactual.…”
Section: Introduction
confidence: 99%
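The nearest unlike neighbour (NUN) retrieval described in the statement above can be sketched in a few lines: find the training instance closest to the query whose class label differs from the query's. This is a minimal illustrative sketch, not the paper's implementation; the function name and toy data are assumptions.

```python
import numpy as np

def nearest_unlike_neighbour(query, X, y, query_label):
    """Return the training instance closest to `query` (Euclidean distance)
    whose label differs from the query's label."""
    mask = y != query_label              # keep only unlike-labelled cases
    candidates = X[mask]
    dists = np.linalg.norm(candidates - query, axis=1)
    return candidates[np.argmin(dists)]

# Toy data: two features, binary labels (illustrative only).
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
nun = nearest_unlike_neighbour(np.array([0.1, 0.1]), X, y, query_label=0)
```

The NUN then serves as the source of candidate feature changes for the counterfactual.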
“…Sparsity calls for minimising the number of modified features, while proximity ensures that the counterfactual instance is as close as possible to the original instance in the feature space, thereby seeking the minimal change necessary to achieve the desired outcome [12]. Both can be addressed either by case-based instance learning [2,5,27] or as parameters within optimisation minimisation techniques [18,25]. Feasibility ensures suggested changes are achievable [20], and plausibility maintains realistic distributions [28].…”
Section: Related Work
confidence: 99%
“…It serves three primary goals [25]: 1) elucidate the reasoning behind decisions; 2) supply adequate information to critique decisions with negative outcomes; and 3) enable a better understanding of the necessary changes to achieve desired outcomes in the future. There is an abundance of techniques to generate cf-XAI in the literature that achieve some subsets of these three goals [2,9,18,24,27]. The focus of this paper instead is to achieve the third goal as a post-processing step taking into account the user perspective.…”
Section: Introduction
confidence: 99%
“…Given a query, if a NUN cannot be found, the closest pair is selected and the feature-level differences between the examples in that pair are used to transform the query into the target. Another approach uses feature-relevance methods like SHAP (Lundberg and Lee 2017) to tailor a feature edit schedule for converting the query into a counterfactual (Wiratunga et al 2021). In this body of work, an imitating function is trained to predict the outcome variable given the instance features.…”
Section: Related Work
confidence: 99%
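The relevance-guided edit schedule mentioned above can be sketched as follows: copy NUN feature values into the query, most relevant feature first, and stop as soon as the model flips to the target class. This is a simplified sketch, not DisCERN itself; the relevance vector stands in for real attribution scores (e.g. SHAP values), and the classifier is a hypothetical stand-in.

```python
import numpy as np

def relevance_ordered_edit(query, nun, relevance, predict, target):
    """Substitute NUN feature values into the query in descending
    relevance order, stopping once `predict` returns the target class.
    Returns the counterfactual, or None if no prefix of edits flips it."""
    cf = query.copy()
    for i in np.argsort(relevance)[::-1]:  # highest relevance first
        cf[i] = nun[i]
        if predict(cf) == target:
            return cf
    return None

# Toy stand-in classifier: predicts class 1 iff the feature sum exceeds 1.
predict = lambda x: int(x.sum() > 1.0)

query = np.array([0.1, 0.2, 0.1])      # currently predicted class 0
nun = np.array([0.9, 0.3, 0.2])        # nearest unlike neighbour (class 1)
relevance = np.array([0.7, 0.2, 0.1])  # hypothetical attribution scores
cf = relevance_ordered_edit(query, nun, relevance, predict, target=1)
```

Ordering edits by relevance is what keeps the counterfactual sparse: only the most influential features are changed before the decision flips.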
“…While drawing NUNs can ensure plausibility, there is no guarantee of proximity: There may be no instances that are sufficiently similar to the query. Several studies have generated counterfactuals for tabular data by interpolating between the query and the NUN (Keane and Smyth 2020; Wiratunga et al 2021). As we are using a generative model, we can perform a similar interpolation in the latent space by interpolating linearly between the latent encodings of the query z_q and the NUN z_NUN to obtain the interpolated latent representation z_ι.…”
Section: Latent Interpolation
confidence: 99%
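The latent interpolation described above is plain linear blending between the two encodings. In the cited work, z_q and z_NUN would come from a trained generative model's encoder; in this minimal sketch, plain vectors stand in for the latent codes, and the step count is an assumption.

```python
import numpy as np

def interpolate_latent(z_q, z_nun, steps=5):
    """Linearly interpolate between the latent encoding of the query
    and that of its NUN; each step yields a candidate latent code
    that a decoder could map back to a counterfactual instance."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * z_q + t * z_nun for t in ts]

# Stand-in latent codes (a real pipeline would obtain these from an encoder).
z_q = np.array([0.0, 2.0])
z_nun = np.array([1.0, 0.0])
path = interpolate_latent(z_q, z_nun, steps=3)
```

Decoding intermediate points on this path lets one pick the code closest to z_q that still yields the desired class, trading proximity against plausibility.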