2018
DOI: 10.1109/access.2018.2870052
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

Cited by 3,802 publications (2,712 citation statements)
References 59 publications
“…It achieves better prediction, but may not contribute to understanding of the underlying phenomenon. Recently, interpretable machine learning models (explainable AI) have attracted broad interest [23,24]. In future work, it would be fruitful to extend our method by incorporating some of these ideas for better interpretability.…”
Section: Results
confidence: 99%
“…Various interpretable ML and post-hoc analysis methods have been developed. Adadi and Berrada [1] provided a comprehensive survey of interpretation methods. Here, we describe only the methods related to tree-based or deep-learning based models.…”
Section: Interpretation Methods for Tree and Deep-Learning Based Models
confidence: 99%
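To make the post-hoc analysis methods mentioned in the statement above concrete, here is a minimal sketch of one common model-agnostic technique, permutation importance, applied to a tree-based model. This example is illustrative only; the dataset, model, and parameter choices are assumptions for demonstration and are not drawn from the survey or the citing paper.

```python
# Minimal sketch: post-hoc interpretation of a tree-based model via
# permutation importance (dataset and model are illustrative assumptions).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy;
# larger drops indicate features the fitted model relies on more heavily.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
top5 = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)[:5]
for name, mean_imp in top5:
    print(f"{name}: {mean_imp:.3f}")
```

Because the importance scores are computed after training, on any fitted estimator, this is a post-hoc method in the sense used by the citing paper, as opposed to models that are interpretable by construction.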
“…Even though this NN model had a high accuracy rate, the model determined that pneumonia patients with asthma should not be admitted, on the reasoning that these patients have a lower risk of dying. This dubious prediction is caused by the fact that these severe patients had been aggressively treated in intensive care units and, as a result, survived at a high rate [1,3]. As shown in this example, a decision that relies on an ML model might cause critical harm to patients.…”
Section: Introduction
confidence: 99%