Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization 2021
DOI: 10.1145/3450614.3463354

Recent Studies of XAI - Review

Abstract: Over the past years, there has been increasing concern regarding the risk of bias and discrimination in algorithmic systems, which has received significant attention from the research community. To ensure a system's fairness, various methods and techniques have been developed to assess and mitigate potential biases. Such methods, also known as "Formal Fairness", look at various aspects of the system's advanced reasoning mechanism and outcomes, with techniques ranging from local explanations (at feature l…

Cited by 15 publications (7 citation statements). References 44 publications.
“…Works that support the category definition:

  Category  Supporting works
  Stage     [6], [8]–[11], [13], [15], [22], [23]
  Model     [6], [8], [10], [12], [13], [15], [22], [23]
  Scope     [6], [8], [10], [14], [15], [22]

A "post-hoc" XAI method is named for the fact that it acts after predictions are made, without knowing how the predictor model reached its decisions (e.g., LIME [24]). It is a surrogate model: it approximates the function of the black-box model by sampling, perturbing data, and weighting instances by their distance to the instance being explained.…”

Section: Category for XAI Methods
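The sample–perturb–weight loop described above can be sketched in a few lines. This is a minimal LIME-style illustration, not the LIME library itself; the names `black_box` and `local_surrogate_slope`, the Gaussian perturbation, and the RBF proximity kernel are all illustrative assumptions.

```python
import math
import random

def black_box(x):
    # Stand-in for an opaque predictor whose internals we cannot inspect.
    return math.sin(x)

def local_surrogate_slope(x0, predict, n_samples=2000, width=0.3, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 and return its slope."""
    rng = random.Random(seed)
    # 1. Perturb: sample points in a neighbourhood of the instance of interest.
    xs = [x0 + rng.gauss(0.0, width) for _ in range(n_samples)]
    # 2. Query the black box on the perturbed samples.
    ys = [predict(x) for x in xs]
    # 3. Weight each sample by proximity to x0 (RBF kernel): closer counts more.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    # 4. Weighted least squares gives the local linear approximation's slope.
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return cov / var

slope = local_surrogate_slope(0.0, black_box)
# Near x0 = 0 the derivative of sin is cos(0) = 1, so the slope is close to 1.
```

The surrogate never sees how `black_box` computes its output; it recovers a local explanation purely from input–output behaviour, which is exactly what makes the method "post-hoc".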
“…The final proposed category for XAI methods is Scope, which separates methods by whether they help understand the general behavior of the model (global interpretability) or explain a single instance or a limited group of data instances (local interpretability) [6]. This category is widely accepted in the reviewed literature, where it appears as a main category for classifying XAI methods [6], [8], [10], [14], [15], [22].…”

Section: Category for XAI Methods
“…Complexity-based techniques make machine learning or deep learning models fully interpretable. Depending on the viewpoint, interpretability can be categorized into intrinsic interpretability [96] and post-hoc interpretability [72]. In general, intrinsic interpretability means that a model with a simple architecture can be explained by the trained model itself, whereas post-hoc interpretability means that the trained model has a complex architecture and its behavior must be explained after training.…”

Section: XAI-based CDSS
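The contrast can be illustrated with an intrinsically interpretable model: ordinary least squares needs no separate explainer, because its fitted parameters are themselves the explanation. A minimal sketch follows; the data points are made up for illustration.

```python
# Intrinsically interpretable model: one-feature ordinary least squares.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# Closed-form OLS: slope = cov(x, y) / var(x), intercept from the means.
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
# The fitted parameters ARE the explanation:
# each unit increase in x adds `slope` to the prediction.
```

A deep network offers no such direct reading of its weights, which is why its behavior must instead be explained post hoc, by a separate technique applied after training.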