2022
DOI: 10.1109/access.2022.3196917

The Robustness of Counterfactual Explanations Over Time

Abstract: Counterfactual explanations are a prominent example of post-hoc interpretability methods in the explainable Artificial Intelligence (AI) research domain. Unlike other explanation methods, they offer the possibility of recourse against unfavourable outcomes computed by machine learning models. However, in this paper we show that retraining machine learning models over time may invalidate the counterfactual explanations of their outcomes. We provide a formal definition of this phenomenon and we in…
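
To make the invalidation phenomenon concrete, here is a minimal sketch, not the paper's formal setup: a counterfactual computed against an initial model can stop flipping the decision once the model is retrained on slightly different data. The dataset, the gradient-step counterfactual search, and the bootstrap "retraining" are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): a counterfactual computed for
# an initial model may be invalidated once the model is retrained.
# Dataset, model, and the naive counterfactual search are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

model_t0 = LogisticRegression().fit(X, y)

# Pick an instance with the unfavourable outcome (class 0).
x = X[model_t0.predict(X) == 0][0]

# Naive counterfactual: step along the model's weight vector until the
# prediction flips (a stand-in for a real counterfactual generator).
w = model_t0.coef_[0]
x_cf = x.copy()
while model_t0.predict([x_cf])[0] == 0:
    x_cf += 0.05 * w / np.linalg.norm(w)
print("valid at t0:", model_t0.predict([x_cf])[0] == 1)  # True by construction

# Retrain at t1 on a bootstrap resample, a crude stand-in for new data
# arriving over time.
idx = rng.choice(len(X), size=len(X), replace=True)
model_t1 = LogisticRegression().fit(X[idx], y[idx])

# The same counterfactual is NOT guaranteed to stay valid: this may
# print False, which is exactly the invalidation the paper studies.
print("valid at t1:", model_t1.predict([x_cf])[0] == 1)
```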

Cited by 15 publications (4 citation statements)
References 53 publications
“…Interestingly, robustness is a multidimensional concept that currently lacks a one-size-fits-all definition. Rather, research discusses what a robust model should do [62, 65-68], investigating how a model should resist different types of perturbations, such as those affecting its input data, data distributions over time, and the model structure. In fact, a robust machine learning model computes predictions that do not vary disproportionately when its inputs are perturbed.…”
Section: Use of LLM-Enhanced CAIs by Individuals with Depression: 2 C…
Citation type: mentioning (confidence: 99%)
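
The quoted definition suggests a simple empirical check. A hedged sketch, with the perturbation scheme and names assumed by me rather than taken from the cited works:

```python
# Hedged sketch (perturbation scheme and names are mine, not from the
# cited works): estimate how often predictions flip under small uniform
# input noise, one simple reading of "do not vary disproportionately".
import numpy as np

def prediction_flip_rate(model, X, eps=0.05, n_trials=20, seed=0):
    """Fraction of (sample, trial) pairs where adding noise in
    [-eps, eps] to every feature changes the predicted class."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)
    flips = 0
    for _ in range(n_trials):
        noise = rng.uniform(-eps, eps, size=X.shape)
        flips += int(np.sum(model.predict(X + noise) != base))
    return flips / (n_trials * len(X))

# Usage with any fitted scikit-learn-style classifier, e.g. the models
# from the first sketch:
# print(prediction_flip_rate(model_t0, X))
```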
“…In summary, robustness is a key requirement for trustworthy AI. It can also be extended to comprise algorithms that provide explanations of machine learning models’ predictions [65, 66, 68, 70]. In this case, robust explanations are not altered by the perturbation of data inputs and are stable over time.…”
Section: Use of LLM-Enhanced CAIs by Individuals with Depression: 2 C…
Citation type: mentioning (confidence: 99%)
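
For counterfactual explanations specifically, "stable over time" can be read as: the explanation keeps achieving the favourable outcome under each retrained model. A small sketch of that reading (terminology mine), reusable with the models from the first example:

```python
# Sketch (terminology mine): "stable over time" for counterfactuals,
# measured as the share that still obtain the favourable outcome under
# each successive retrained model.
import numpy as np

def counterfactual_validity_rate(counterfactuals, models, favourable=1):
    """counterfactuals: array-like of shape (n, d); models: classifiers
    from successive retrainings. Returns one validity rate per model."""
    cfs = np.atleast_2d(np.asarray(counterfactuals))
    return [float(np.mean(m.predict(cfs) == favourable)) for m in models]

# Usage with the objects from the first sketch:
# print(counterfactual_validity_rate([x_cf], [model_t0, model_t1]))
```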
“…For instance, some gradient-based attribution techniques fail to satisfy intuitive properties (like implementation invariance; Sundararajan et al., 2017) or ignore information at the top layers of neural networks (Adebayo et al., 2018), while sampling-based alternatives may suffer from high variance (Teso, 2019; Zhang et al., 2019). This is further aggravated by the fact that explanation techniques are not robust: slight changes in the input, model, or hyper-parameters of the explainer can yield very different explanations (Artelt et al., 2021; Ferrario and Loi, 2022b; Virgolin and Fracaros, 2023). A number of other issues have been identified in the literature (Hooker et al., 2019; Kindermans et al., 2019; Adebayo et al., 2020; Kumar et al., 2020; Sixt et al., 2020).…”
Section: Semantics and Faithfulness
Citation type: mentioning (confidence: 99%)
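
To see the kind of instability the quote describes without relying on any of the cited techniques, the sketch below uses a simple occlusion attribution (my stand-in, not one of the methods referenced) and checks how much a small input change reorders the feature ranking:

```python
# Sketch (assumptions mine): occlusion-style attributions as a simple
# stand-in for the attribution techniques cited above, used to show how
# a small input change can reorder a feature-importance ranking.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def occlusion_attributions(model, x, baseline):
    """Attribution of feature i = drop in P(class 1) when feature i is
    replaced by its baseline value (e.g. the training mean)."""
    p = model.predict_proba([x])[0, 1]
    attrs = np.empty(len(x))
    for i in range(len(x)):
        x_occ = x.copy()
        x_occ[i] = baseline[i]
        attrs[i] = p - model.predict_proba([x_occ])[0, 1]
    return attrs

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

a1 = occlusion_attributions(model, X[0], X.mean(axis=0))
a2 = occlusion_attributions(model, X[0] + 0.05, X.mean(axis=0))  # nearby input
print("rank agreement of the two explanations:", spearmanr(a1, a2)[0])
```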
“…With this same approach, a feedback loop can exist to update the model design characteristics to match actionability goals. Drawing parallels with counterfactual actionability [6] and extending the idea of the local, cohort, and global explainability of model predictions [14] to actionability, we consider global actionability (G.A.) as the average leeway available to recipient-users captured in the dataset, specific to the prediction model, to modify their initially captured feature variables in order to affect subsequent predictions or evaluations. An important point to make is that the ability to flip a model's decision for every R-user is not explicitly guaranteed in this definition, as might be the case with definitions of actionable recourse [19] and counterfactual suggestions [6], [18].…”
Section: Mapping the Boundaries of Actionability Measurement: A Fram…
Citation type: mentioning (confidence: 99%)
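
One possible, heavily hedged operationalisation of the quoted G.A. idea, since the cited paper's exact measure is not reproduced here: average over users the leeway remaining after the smallest change to a designated mutable feature that flips the decision. The function name, the single-feature restriction, and the budget parameter are all hypothetical.

```python
# Hedged sketch of one reading of global actionability (G.A.); the
# quoted paper's exact measure is not reproduced here. 'feature' is a
# single mutable feature index and 'budget' the maximum change an
# R-user can make; both are hypothetical parameters of this sketch.
import numpy as np

def global_actionability(model, X, feature, budget, steps=50):
    """Average leeway: budget minus the smallest change to `feature`
    that flips the model's decision (0 if no flip within budget)."""
    leeways = []
    for x in X:
        base = model.predict([x])[0]
        flip_cost = None
        for delta in np.linspace(0.0, budget, steps):
            for sign in (1.0, -1.0):
                moved = x.copy()
                moved[feature] = x[feature] + sign * delta
                if model.predict([moved])[0] != base:
                    flip_cost = delta
                    break
            if flip_cost is not None:
                break
        leeways.append(budget - flip_cost if flip_cost is not None else 0.0)
    return float(np.mean(leeways))

# Usage with the model and data from the first sketch, treating
# feature 0 as the mutable one:
# print(global_actionability(model_t0, X, feature=0, budget=2.0))
```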