2021
DOI: 10.3390/make3030027
Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability

Abstract: Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., linear classifier) around the prediction through generating simulated data around the instance by random perturbation, and obtaining feature importance through applying some form of feature selection. While…
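To make the procedure summarized in the abstract concrete, below is a minimal sketch of a LIME-style local surrogate built with NumPy and scikit-learn. The function name, kernel choice, and parameter values are illustrative assumptions, not the paper's implementation: the black-box model is queried on randomly perturbed copies of the instance, samples are weighted by their proximity to it, and the coefficients of a weighted linear model serve as feature importances.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_style_explanation(instance, predict_fn, num_samples=1000,
                           kernel_width=0.75, num_features=5, rng_seed=0):
    """Sketch of a LIME-style local explanation for one tabular instance.

    instance   : 1-D numpy array of feature values to explain
    predict_fn : callable mapping an (n, d) array to model outputs
                 (for classifiers, pass e.g. a class-probability function)
    Returns the indices and weights of the most important features.
    """
    rng = np.random.default_rng(rng_seed)
    d = instance.shape[0]

    # 1) Simulate a neighbourhood by random perturbation around the instance.
    perturbed = instance + rng.normal(scale=1.0, size=(num_samples, d))

    # 2) Query the black-box model on the perturbed samples.
    y = predict_fn(perturbed)

    # 3) Weight samples by proximity (exponential kernel on Euclidean distance).
    dist = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))

    # 4) Fit a simple interpretable (linear) surrogate on the neighbourhood.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, y, sample_weight=weights)

    # 5) Feature selection: keep the largest-magnitude coefficients.
    top = np.argsort(np.abs(surrogate.coef_))[::-1][:num_features]
    return list(zip(top, surrogate.coef_[top]))
```

Because step 1 draws random perturbations, repeated runs with different seeds can yield different explanations for the same instance; that instability is what the deterministic variant proposed in the paper addresses.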

Cited by 151 publications (68 citation statements) | References 27 publications
“…For the LIME stability assessment, additional indicators may be helpful; they make it possible to increase confidence in the computed results and to avoid cases in which different explanations are obtained for the same predictions [53]. One possible way to reduce instability in the obtained explanations is to replace random perturbation of the data with agglomerative hierarchical clustering (AHC) [54]. Robust model interpretability can also be difficult to achieve because the local approximation is based on linear models, which may be inadequate for many of the analyzed problems.…”
Section: Results
confidence: 99%
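The replacement of random perturbation by clustering mentioned in [54] is the idea behind the deterministic variant (DLIME). The sketch below shows one hedged way to realize it with scikit-learn, assuming access to the training data: agglomerative hierarchical clustering partitions the training set, a 1-nearest-neighbour lookup selects the cluster containing the instance, and the linear surrogate is fitted on that cluster, so repeated runs return the same explanation. Names and parameters are illustrative, not the exact procedure of the cited work.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsClassifier

def dlime_style_explanation(instance, X_train, predict_fn,
                            n_clusters=5, num_features=5):
    """Deterministic LIME-style sketch: the neighbourhood comes from
    agglomerative hierarchical clustering of the training data rather
    than from random perturbation, so repeated runs give the same result."""
    # 1) Cluster the training data once (deterministic, no sampling).
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)

    # 2) Find the cluster the instance belongs to via its nearest neighbour.
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, labels)
    cluster = knn.predict(instance.reshape(1, -1))[0]
    neighbourhood = X_train[labels == cluster]

    # 3) Fit the interpretable (linear) surrogate on the selected cluster.
    y = predict_fn(neighbourhood)
    surrogate = Ridge(alpha=1.0).fit(neighbourhood, y)

    # 4) Report the largest-magnitude coefficients as feature importances.
    top = np.argsort(np.abs(surrogate.coef_))[::-1][:num_features]
    return list(zip(top, surrogate.coef_[top]))
```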
“…After deriving the SVM-based regression model for battery electrode mass loading prediction, the local interpretable model-agnostic explanation (LIME) is utilized to further explain the related predictions. It should be noted that LIME is a model-agnostic solution that can mimic the underlying behavior of a black-box model in order to generate an explanation of the related prediction (Zafar and Khan, 2021).…”
Section: Local Interpretable Model-Agnostic Explanations
confidence: 99%
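As a hedged illustration of how a statement like this is typically put into practice, the snippet below applies the reference lime package's tabular explainer to a scikit-learn SVR model. The synthetic data and feature names are placeholders standing in for electrode manufacturing variables, not the data of the cited battery study.

```python
import numpy as np
from sklearn.svm import SVR
from lime.lime_tabular import LimeTabularExplainer

# Placeholder data standing in for electrode manufacturing features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = X_train @ np.array([0.5, -1.2, 0.3, 0.0]) + rng.normal(scale=0.1, size=200)

# Black-box regressor to be explained (SVM-based regression, as in the quote).
model = SVR(kernel="rbf").fit(X_train, y_train)

# Model-agnostic explainer: it only needs training data statistics
# and the model's prediction function.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=["viscosity", "solid_ratio", "coating_speed", "temperature"],
    mode="regression",
)

# Explain a single prediction; num_features limits the explanation length.
exp = explainer.explain_instance(X_train[0], model.predict, num_features=4)
print(exp.as_list())  # list of (feature condition, weight) pairs
```

Because the explainer only needs the training data and the model's prediction function, the same call pattern works for any regressor or classifier, which is what makes the approach model-agnostic.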
“…Nellithimaru proposed a new grape yield estimation method that combines instance segmentation and SLAM to obtain grape information [21], which improves accuracy. However, the above-mentioned deep neural network-based methods still suffer from low robustness when extracting characterization information, such as the size of grape berries, from dense and unconstrained grape images [22].…”
Section: Introduction
confidence: 99%