2019
DOI: 10.1007/978-3-030-33607-3_49

ALIME: Autoencoder Based Approach for Local Interpretability

Abstract: Machine learning and especially deep learning have garnered tremendous popularity in recent years due to their increased performance over other methods. The availability of large amounts of data has aided in the progress of deep learning. Nevertheless, deep learning models are opaque and often seen as black boxes. Thus, there is an inherent need to make the models interpretable, especially so in the medical domain. In this work, we propose a locally interpretable method, which is inspired by one of the recent t…

Cited by 71 publications (44 citation statements) | References 13 publications
“…Over time, this may cause a lack of trust in both clinicians unfamiliar with ML principles and patients who are attempting to decide between various therapeutic modalities. 113 Recent guidelines place a substantial focus on enhancing clarity for practitioners, 114 and there are a number of efforts currently underway to help make ML models generally more interpretable, 115 including those utilized within the clinical space. In fact, a whole discipline has been developed that is dedicated to the issue of the interpretability of models for implementation.…”
Section: Creating Digestible Results and a Culture of Sustainable Innovation (mentioning)
confidence: 99%
“…Numerical results illustrating the explanation method on the MNIST dataset are shown in Figures 9, 10, 11, 12. The figures have the same structure as Figures 5, 6, 7, 8, i.e., every figure contains numerical results of six experiments, depending on the number of important features s used for perturbation.…”
Section: Results (mentioning)
confidence: 99%
“…The main intuition of LIME is that the explanation may be derived locally from a set of synthetic instances generated randomly in the neighborhood of the instance to be explained, such that every synthetic instance is weighted according to its proximity to the explained instance. Several modifications of LIME have been proposed due to the success and simplicity of the method, for example, ALIME [12], NormLIME [13], DLIME [14], Anchor LIME [15], LIME-SUP [16], LIME-Aleph [17], SurvLIME [18]. Garreau and Luxburg [19] provided a thorough theoretical analysis of LIME.…”
Section: Explanation Models (mentioning)
confidence: 99%
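The weighted local-surrogate idea summarized in the statement above can be illustrated with a short sketch. The following Python example is a minimal, hypothetical illustration under assumed names (`black_box_predict`, `explain_locally`, and the perturbation settings are assumptions, not code from ALIME or the LIME package): synthetic instances are drawn around the instance to be explained, weighted by proximity, and a weighted linear model is fitted to the black box's predictions.

```python
# Minimal sketch of LIME-style local surrogate fitting (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box_predict(X):
    # Placeholder black box: any model's prediction function could stand in here.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

def explain_locally(x0, n_samples=500, kernel_width=0.75):
    # 1. Generate synthetic instances by perturbing x0 with Gaussian noise.
    X_synth = x0 + rng.normal(scale=0.5, size=(n_samples, x0.shape[0]))
    # 2. Weight each synthetic instance by its proximity to x0 (RBF kernel).
    dists = np.linalg.norm(X_synth - x0, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    # 3. Query the black box and fit a weighted linear surrogate model.
    y = black_box_predict(X_synth)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_synth, y, sample_weight=weights)
    # The surrogate's coefficients serve as the local explanation.
    return surrogate.coef_

print(explain_locally(np.array([0.3, -1.2])))
```

ALIME itself replaces the random perturbation scheme with an autoencoder-based neighborhood weighting; the sketch above only shows the generic local-surrogate recipe the citing paper describes.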
“…One of the best-known methods is LIME [16], which is based on randomly generating synthetic examples in a local area around a test example and minimizing the difference between the predictions of the black-box model being explained and those of an approximating linear model on these examples. Following this method, many modifications have been proposed due to the simplicity of LIME, for example, ALIME [24], NormLIME [25], DLIME [26], Anchor LIME [27], LIME-SUP [28], LIME-Aleph [29], GraphLIME [30]. The idea of using a linear approximation is also implemented in another method called SHAP [31], which is based on a game-theoretic approach and Shapley values [32].…”
Section: Related Work (mentioning)
confidence: 99%
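The game-theoretic idea behind SHAP mentioned in the statement above can likewise be sketched by computing exact Shapley values for a tiny coalition game. This is a minimal illustration under assumed names (`shapley_values`, `value_fn`, and the toy payoff are hypothetical; this is not the SHAP library's API): each feature's attribution is its weighted average marginal contribution over all subsets of the remaining features.

```python
# Minimal sketch of exact Shapley value computation for a small feature set.
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(value_fn, n_features):
    # value_fn(subset) returns the "payoff" for a coalition of feature indices.
    phi = np.zeros(n_features)
    features = list(range(n_features))
    for i in features:
        others = [f for f in features if f != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy additive payoff: the value of a coalition is the sum of its members' contributions,
# so the Shapley values recover the contributions exactly.
contrib = {0: 1.0, 1: 2.0, 2: -0.5}
print(shapley_values(lambda S: sum(contrib[f] for f in S), 3))
```

The exact enumeration above is exponential in the number of features; SHAP itself relies on sampling and model-specific approximations, which the sketch does not attempt to reproduce.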