Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3467166

TimeSHAP: Explaining Recurrent Models through Sequence Perturbations

Citation Types: 0 supporting, 18 mentioning, 0 contrasting
Year Published: 2021–2024
Cited by 45 publications (18 citation statements)
References 17 publications
“…In LSTM-based models, attention is computed over hidden representations across timesteps, which does not provide faithful token-level attribution. Approaches that trace explanations back to individual timesteps (Bento et al., 2020) or input tokens (Tutek and Snajder, 2020) are only just emerging. Therefore, we limit ourselves to an analysis of the raw attention weights.…”
Section: Attention-based Explanations (citation type: mentioning)
confidence: 99%
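
The quoted claim, that attention weights sit on hidden states which already aggregate earlier tokens, is easy to see in code. Below is a minimal PyTorch sketch (the model name, dimensions, and additive scorer are illustrative assumptions, not taken from the cited papers): the softmax is taken over per-timestep LSTM hidden states, each of which summarizes the entire prefix, so a large weight at step t cannot be read as a faithful attribution to token t.

```python
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    """LSTM classifier with additive attention over per-timestep hidden states.

    Illustrative sketch only; names and dimensions are assumptions.
    """
    def __init__(self, vocab_size: int, embed_dim: int = 64,
                 hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)         # additive attention scorer
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids: torch.Tensor):
        # h: (batch, seq_len, hidden_dim), one hidden state per timestep;
        # each h[:, t] already mixes information from tokens 0..t.
        h, _ = self.lstm(self.embed(token_ids))
        # The attention distribution is over hidden states, not tokens,
        # which is why it is not a faithful token-level attribution.
        attn = torch.softmax(self.score(h).squeeze(-1), dim=1)  # (batch, seq_len)
        context = torch.bmm(attn.unsqueeze(1), h).squeeze(1)    # weighted sum
        return self.out(context), attn
```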
“…Furthermore, availability of open access data in this field is also improving [4,5,6], attracting significant attention from the artificial intelligence (AI), machine learning (ML) and deep learning (DL) research, as well as the medical data science communities [7,8,9]. As such, it is evident that the temporal setting is becoming the cornerstone for ML in healthcare and medicine, with a significant potential for impact. Numerous novel methods have been developed to tackle medically-relevant tasks in the time domain, such as: prediction [10,11], causal inference [12,13,14], time-to-event analysis [15,16,17], clustering [18,19], as well as data imputation [20,21], and model interpretability [22,23] methods, among others. Yet currently a significant limitation exists in the lack of standardization of both data representation and model benchmarking [7,9].…”
Section: Abstract, Machine Learning • Time Series • Medicine, 1 Time Do... (citation type: mentioning)
confidence: 99%
“…In LSTM-based models, attention is computed over hidden representations across timesteps, which does not provide faithful token-level importance scores. Approaches that trace explanations back to individual timesteps [41] or input tokens [42] are only just emerging. Therefore, we analyze the raw attention weights for the LSTM-based model we consider below (see Section 4.2).…”
Section: Explanations From Attention Mechanisms (citation type: mentioning)
confidence: 99%
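
As context for "tracing explanations back to individual timesteps", here is a deliberately naive perturbation sketch in PyTorch. It is not TimeSHAP's algorithm, which adapts KernelSHAP to estimate Shapley values over coalitions of perturbed events; it merely occludes one timestep at a time, and the `model` interface and `baseline` value are assumptions.

```python
import torch

def timestep_occlusion(model, x: torch.Tensor, baseline: float = 0.0) -> list:
    """Attribute a sequence model's score to timesteps by single-step occlusion.

    x: (1, seq_len, n_features) input sequence; `model` is assumed to return
    a single scalar score per sequence. Simplified stand-in, not TimeSHAP,
    which estimates Shapley values over coalitions of perturbed events.
    """
    model.eval()
    with torch.no_grad():
        ref = model(x)                    # reference score on the intact sequence
        drops = []
        for t in range(x.shape[1]):
            x_pert = x.clone()
            x_pert[:, t, :] = baseline    # perturb a single timestep
            drops.append((ref - model(x_pert)).item())
    return drops  # larger drop => timestep t mattered more for the score
```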