2022
DOI: 10.1016/j.procs.2022.08.105
Explainable AI and Interpretable Machine Learning: A Case Study in Perspective

Cited by 35 publications (6 citation statements)
References 7 publications
“…The SHAP framework plays a pivotal role in enhancing the interpretability of machine learning models, thereby facilitating responsible and ethical use of AI systems [42]. Details related to the mathematical framework underlying the calculation of SHAP values are presented in Ref.…”
Section: Model-Agnostic Explainability Using SHAP Framework
confidence: 99%
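The excerpt above refers to the mathematical framework behind SHAP values. As a minimal illustrative sketch (not the cited work's code), the quantity SHAP approximates is the exact Shapley value: each feature's weighted average marginal contribution over all feature subsets, with absent features replaced by a baseline. The model `f`, inputs, and baseline below are hypothetical examples chosen so the result can be checked by hand.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for a model f over len(x) features.

    'Absent' features are set to their baseline value; each feature's
    value is its marginal contribution averaged over all orderings.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                x_S = [x[j] if j in S else baseline[j] for j in range(n)]
                x_Si = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                phi[i] += w * (f(x_Si) - f(x_S))
    return phi

# Hypothetical model with an interaction term, evaluated at x = (1, 2)
# against a zero baseline; the x1*x2 interaction is split evenly.
f = lambda z: 3 * z[0] + 2 * z[1] + z[0] * z[1]
phi = shapley_values(f, [1.0, 2.0], [0.0, 0.0])  # phi == [4.0, 5.0]
```

The values satisfy the efficiency property: they sum to `f(x) - f(baseline)`. The SHAP library itself uses efficient approximations (e.g. for tree ensembles) rather than this exponential-time enumeration.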
“…To examine the model without affecting its efficiency, the method employs post hoc and intrinsic methods after training. A widely used explainable artificial intelligence method called local interpretable model-agnostic explanations (LIME) is adopted to explain the functioning of ML and deep learning (DL) models by providing localized, model-agnostic interpretations [18,19].…”
Section: Introduction
confidence: 99%
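The localized, model-agnostic interpretation LIME provides can be sketched in one dimension: perturb the input around the instance being explained, weight each perturbed sample by its proximity, and fit a weighted linear surrogate whose slope is the local explanation. This is a minimal sketch of the idea only, not the `lime` library's API; the black-box model and all parameter values below are illustrative assumptions.

```python
import math
import random

def lime_1d(f, x0, n_samples=2000, scale=0.5, kernel_width=0.75, seed=0):
    """LIME-style local surrogate for a 1-D black-box model f around x0.

    Returns the slope of a proximity-weighted least-squares line,
    i.e. a local linear explanation of f near x0.
    """
    rng = random.Random(seed)
    zs = [x0 + rng.gauss(0, scale) for _ in range(n_samples)]  # perturbations
    ys = [f(z) for z in zs]
    # Exponential proximity kernel: nearby samples count more.
    ws = [math.exp(-((z - x0) ** 2) / kernel_width ** 2) for z in zs]
    sw = sum(ws)
    zb = sum(w * z for w, z in zip(ws, zs)) / sw   # weighted means
    yb = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (z - zb) * (y - yb) for w, z, y in zip(ws, zs, ys))
    var = sum(w * (z - zb) ** 2 for w, z in zip(ws, zs))
    return cov / var  # slope of the local surrogate

# Hypothetical black box: f(z) = z^2, whose true derivative at 1.0 is 2.0.
slope = lime_1d(lambda z: z * z, x0=1.0)  # close to 2.0
```

The real LIME additionally handles multi-feature and tabular/text/image inputs and fits a sparse linear model, but the perturb-weight-fit loop above is the core of the technique.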
“…XAI emphasizes that AI should have explainability, meaning that the model and its results can be explained in a way that humans can understand. XAI allows the interpretation of the learned model and analyzes its logical flow by focusing on the reasons that the system has a given demerit [7]. Simultaneously, a growing number of experts at the intersection of AI and healthcare have concluded that the ability of AI models to provide explanations to humans is more important than their accuracy when it comes to practical applications in clinical settings.…”
Section: Introduction
confidence: 99%