2020
DOI: 10.1093/jamia/ocaa053
Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review

Abstract: Objective: To conduct a systematic scoping review of explainable artificial intelligence (XAI) models that use real-world electronic health record data, categorize these techniques according to different biomedical applications, identify gaps of current studies, and suggest future research directions.


Cited by 222 publications (132 citation statements)
References 67 publications
“…Decisions in such systems can be more optimized, but gaps exist in appropriate explanations in the context of expert medical knowledge on how such decisions are derived. Development of effective explainable methods requires more interdisciplinary cooperation between professionals from different domains, such as machine learning researchers and medical experts [20], and the effectiveness of the interpretation of explainable CDSSs must be evaluated based on how its interpretation helps human users, which may require human‐in‐the‐loop psychology experiments to measure mental models, task performance, user satisfaction, and appropriate trust [21].…”
Section: Clinical Decision Support Systems Interpretability and Ada… (mentioning)
Confidence: 99%
“…Not all algorithms are inscrutable [47]. Google, for instance, published an evaluation of the model used by its algorithm for assessing retinal images [48]. As LHS develop, policies could be put in place that prohibit black box algorithms.…”
Section: Conflict #3: Transparency and Machine Explanations (mentioning)
Confidence: 99%
“…For more reading on this we refer the interested audience to a recent systematic review on the explainable AI models using EHR data [23]. For example, in a logistic regression model for a binary outcome, the coefficients of the features (predictors) can be readily transformed into odds ratios and can be easily understood as feature importance. Nevertheless, as these coefficients are estimated from the input data, when the data points were replaced with different imputation techniques, the extent to which missing data points are extrapolated has a certain impact on the interpretation of these coefficients.…”
Section: Introduction (mentioning)
Confidence: 99%
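The excerpt above notes that logistic regression coefficients map directly to odds ratios. A minimal sketch of that transformation, using hypothetical feature names and coefficient values (not taken from any of the reviewed studies):

```python
import math

# Hypothetical fitted coefficients (log-odds scale) from a binary-outcome
# logistic regression; names and values are purely illustrative.
coefficients = {"age_decade": 0.693, "smoker": 1.10, "bmi": -0.22}

# A coefficient beta maps to an odds ratio exp(beta): a one-unit increase in
# the feature multiplies the odds of the outcome by exp(beta), holding the
# other features fixed. OR > 1 raises the odds; OR < 1 lowers them.
odds_ratios = {name: math.exp(beta) for name, beta in coefficients.items()}

for name, or_ in odds_ratios.items():
    print(f"{name}: OR = {or_:.2f}")
```

For example, the hypothetical coefficient 0.693 on `age_decade` yields an odds ratio of about 2.0, i.e. each additional decade of age roughly doubles the odds of the outcome under this toy model — which is why such coefficients are often cited as an inherently interpretable form of feature importance.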
“…[12] Researchers in the field have taken different approaches to address the interpretability of machine learning models, for instance, feature interaction and importance, attention mechanisms, data dimensionality reduction, knowledge distillation, and rule extraction [23]. Nevertheless, there are still some fundamental issues that need to be addressed, such as fidelity of the post-hoc interpretation methods to the reference model, evaluation of the interpretation methods, and design biases due to focusing on the intuition of researchers rather than real end-users’ (medical professionals in this context) needs.…”
Section: Introduction (mentioning)
Confidence: 99%
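One of the feature-importance approaches mentioned in the excerpt above can be sketched with permutation importance: shuffle one feature column and measure the drop in model accuracy. This is a self-contained toy illustration (the two-feature model and data are invented, not from the reviewed studies):

```python
import random

# Toy "model": predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
def model(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Drop in accuracy after shuffling one feature column.

    A larger drop means the model relied more on that feature;
    a near-zero drop suggests the feature is unimportant.
    """
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    col = [r[feature_idx] for r in rows]
    rng.shuffle(col)
    shuffled = [list(r) for r in rows]  # copy so the original data is untouched
    for r, v in zip(shuffled, col):
        r[feature_idx] = v
    return base - accuracy(shuffled, labels)

rows = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
labels = [1, 1, 0, 0]
print(permutation_importance(rows, labels, 0))  # informative feature
print(permutation_importance(rows, labels, 1))  # ignored feature: 0.0
```

Note this sketch also exposes the fidelity concern raised in the excerpt: the importance score depends on the particular shuffle (and, in real pipelines, on how missing values were imputed), so post-hoc scores like these need their own evaluation rather than being taken at face value.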