Proceedings of the ACM Conference on Health, Inference, and Learning 2020
DOI: 10.1145/3368555.3384460
Explaining an increase in predicted risk for clinical alerts

Abstract: Much work aims to explain a model's prediction on a static input. We consider explanations in a temporal setting where a stateful dynamical model produces a sequence of risk estimates given an input at each time step. When the estimated risk increases, the goal of the explanation is to attribute the increase to a few relevant inputs from the past. While our formal setup and techniques are general, we carry out an in-depth case study in a clinical setting. The goal here is to alert a clinician when a patient's r…
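The abstract describes attributing a rise in predicted risk to a few relevant past inputs. As a rough illustration only, and not the paper's own technique, the sketch below shows a generic occlusion-style attribution for a sequence model: each past time step is masked in turn and the resulting change in the risk increase is recorded. The names `risk_model` and `baseline_value`, and the per-step masking scheme, are assumptions introduced here.

```python
# Hypothetical sketch (not the paper's method): occlusion-style attribution of a
# risk increase between two time steps t0 < t1 for a stateful sequence model.
import numpy as np

def attribute_risk_increase(risk_model, inputs, t0, t1, baseline_value=0.0):
    """Score how much each past time step contributes to the rise in predicted risk.

    risk_model: callable mapping an input sequence of shape (T, d) to a sequence
        of risk estimates of shape (T,); it is re-run from scratch on each
        masked copy of the inputs.
    inputs: array of shape (T, d), one feature vector per time step.
    t0, t1: indices with t0 < t1 <= T - 1; we explain risks[t1] - risks[t0].
    baseline_value: value used to occlude ("mask out") a single time step.
    """
    risks = risk_model(inputs)
    increase = risks[t1] - risks[t0]

    scores = np.zeros(t1 + 1)
    for t in range(t1 + 1):
        masked = inputs.copy()
        masked[t, :] = baseline_value          # occlude time step t
        masked_risks = risk_model(masked)
        masked_increase = masked_risks[t1] - masked_risks[t0]
        # Attribution = how much of the increase disappears when step t is removed.
        scores[t] = increase - masked_increase
    return increase, scores
```

A simple baseline of this kind can be compared against gradient-based attributions; because the model is stateful, each masked copy is re-run over the whole prefix, as in the loop above.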

Cited by 7 publications (4 citation statements) · References 38 publications
“…The interpretability of the time series forecasting model is of great importance for many downstream tasks [88,38,44]. Through disentangling the latent variables of the generative model, not only the interpretability but also the reliability of the prediction can be further enhanced [64].…”
Section: Disentangling Latent Variables For Interpretation
confidence: 99%
“…33 constructs synthetic datasets with feature importance labels for evaluation. 34 obtains feature importance labels from both manually constructed tasks and domain experts. 35 derives importance labels from tasks with graph-valued data with computable ground truths.…”
Section: Evaluation Of Feature Importance Interpretation
confidence: 99%
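The synthetic-ground-truth evaluation mentioned in the excerpt above can be illustrated with a small sketch. The data generator and the precision metric below are hypothetical stand-ins, assuming labels that depend only on a few known (time step, feature) positions; they are not the cited benchmarks' actual protocols.

```python
# Hypothetical sketch: scoring a feature-importance method against synthetic
# ground truth, in the spirit of the benchmark-style evaluations cited above.
import numpy as np

def make_synthetic_example(T=50, d=5, n_relevant=3, seed=0):
    """Random multivariate series whose label depends only on a few
    (time step, feature) positions; those positions are the ground truth."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(T, d))
    mask = np.zeros((T, d), dtype=bool)
    idx = rng.choice(T * d, size=n_relevant, replace=False)
    mask[np.unravel_index(idx, (T, d))] = True
    y = int(x[mask].sum() > 0)           # label driven only by the masked positions
    return x, y, mask

def precision_at_k(attributions, mask):
    """Fraction of the top-|mask| attributed positions that are truly relevant."""
    k = int(mask.sum())
    top = np.argsort(attributions.ravel())[::-1][:k]
    return mask.ravel()[top].mean()
```

An attribution method is then judged by how often its highest-scoring positions coincide with the known relevant ones, averaged over many generated examples.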
“…Recent work [23,30,31] has identified some limitations in time series interpretability. We provide the first benchmark that systematically evaluates different saliency methods across multiple neural architectures in a multivariate time series setting, identifies common limitations, and proposes a solution to adapt existing methods to time series.…”
Section: Background and Related Work
confidence: 99%