Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning 2021
DOI: 10.1007/978-3-030-83356-5_1
Introduction to Interpretability and Explainability

Abstract: In recent years, we have seen gains in the adoption of machine learning and artificial intelligence applications. However, continued adoption is constrained by several limitations. The field of Explainable AI addresses one of the largest shortcomings of machine learning and deep learning algorithms today: the interpretability and explainability of models. As algorithms become more powerful and are able to predict with greater accuracy, it becomes increasingly important to understand how and why a predi…

Cited by 6 publications (4 citation statements)
References 41 publications
“…When combined with Lundberg and Lee’s [30] ML algorithms, this framework, which works with Shapley values, brought transparency, explainability, and predictability to black-box models [31]. The SHAP works on all possible permutations of the features and calculates the average of the contributions. This study presents an explanatory design with SHAP algorithms for CVD risk factors.…”
Section: Methods
Mentioning confidence: 99%
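The quoted passage describes the permutation view of SHAP: each feature's attribution is its marginal contribution averaged over all orderings of the features. A minimal sketch of that computation follows, assuming a hypothetical toy model and baseline; this illustrates the Shapley formula itself, not the SHAP library's API.

```python
# Permutation-based Shapley value estimation: average each feature's
# marginal contribution over all feature orderings. Model, input, and
# baseline are hypothetical illustrations.
from itertools import permutations

def model(x):
    # Toy model with an interaction term between features "a" and "b".
    return 2.0 * x["a"] + 1.0 * x["b"] + x["a"] * x["b"]

def coalition_value(coalition, x, baseline):
    # Evaluate the model with features outside the coalition
    # held at their baseline values.
    z = {f: (x[f] if f in coalition else baseline[f]) for f in x}
    return model(z)

def shapley_values(x, baseline):
    features = list(x)
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        seen = set()
        for f in order:
            before = coalition_value(seen, x, baseline)
            seen.add(f)
            after = coalition_value(seen, x, baseline)
            phi[f] += after - before  # marginal contribution of f
    return {f: phi[f] / len(perms) for f in features}

phi = shapley_values({"a": 1.0, "b": 1.0}, {"a": 0.0, "b": 0.0})
# By the efficiency property, the attributions sum to
# model(x) - model(baseline) = 4.0 here; the interaction term
# is split evenly between "a" and "b".
```

Exhaustive enumeration of permutations is exponential in the number of features, which is why practical SHAP implementations rely on sampling or model-specific shortcuts.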
“…The term XAI refers to the field of AI research that is dedicated to the generation of explanations with regard to the increasingly complex machine learning models (Montavon et al., 2018; Samek et al., 2019, 2021; Holzinger et al., 2022) and is crucial in numerous domains to ensure model safety, robustness, and resilience to data drift. They may also reveal useful correlations in the data as well as ensure that the model results are understood by domain experts (Samek et al., 2021; Lapuschkin et al., 2019; Kamath & Liu, 2021). This field opened the door to numerous impactful contributions across an array of knowledge areas.…”
Section: Introduction
Mentioning confidence: 99%
“…Without explaining why a model predicts a mood score, healthcare professionals cannot determine what insights the prediction contains [ 37 ]. These insights can then be used to check a model’s fidelity (whether the model predictions make sense) [ 38 ] and suggest interventions that help manage the symptoms in a personalised fashion.…”
Section: Introduction
Mentioning confidence: 99%
“…Recent advances in explainable Artificial Intelligence (XAI) offer solutions to the problem of trustworthiness in ML and DL models. Explainable models (we use the terms explainability and interpretability interchangeably in this work [ 38 ]) such as Decision Trees [ 36 ] can be easily processed/simplified to explain their outputs [ 39 ]. However, their expressive power is limited by their size, and increasing their expressiveness decreases their interpretability.…”
Section: Introduction
Mentioning confidence: 99%
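The last citation statement notes that decision trees can be easily processed to explain their outputs: a prediction is just a chain of explicit threshold tests that can be read back as rules. A minimal sketch of that idea, using a hypothetical hand-built tree and made-up feature names (not any citing paper's model):

```python
# Trace a decision tree's prediction and return the path of threshold
# tests as a human-readable explanation. Tree structure and features
# are hypothetical.
def predict_with_explanation(node, x, path=None):
    path = path if path is not None else []
    if "leaf" in node:
        return node["leaf"], path
    feature, threshold = node["feature"], node["threshold"]
    if x[feature] <= threshold:
        path.append(f"{feature} <= {threshold}")
        return predict_with_explanation(node["left"], x, path)
    path.append(f"{feature} > {threshold}")
    return predict_with_explanation(node["right"], x, path)

tree = {
    "feature": "age", "threshold": 50,
    "left": {"leaf": "low risk"},
    "right": {
        "feature": "cholesterol", "threshold": 240,
        "left": {"leaf": "medium risk"},
        "right": {"leaf": "high risk"},
    },
}

label, rules = predict_with_explanation(tree, {"age": 63, "cholesterol": 250})
# label is "high risk"; rules lists the tests taken:
# ["age > 50", "cholesterol > 240"]
```

This direct readability is what the quote contrasts with expressive power: a tree deep enough to capture complex patterns produces rule chains too long to serve as useful explanations.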