Published: 2021
DOI: 10.3390/e23020143

Benchmarking Attention-Based Interpretability of Deep Learning in Multivariate Time Series Predictions

Abstract: The adoption of deep learning models within safety-critical systems cannot rely on good prediction performance alone but needs to provide interpretable and robust explanations for their decisions. When modeling complex sequences, attention mechanisms are regarded as the established approach to supporting deep neural networks with intrinsic interpretability. This paper focuses on the emerging trend of specifically designing diagnostic datasets for understanding the inner workings of attention-mechanism-based deep learning models for multivariate forecasting tasks.

Cited by 19 publications (13 citation statements: 0 supporting, 13 mentioning, 0 contrasting). References 23 publications.

“…Another contribution by Barić et al [4], "Benchmarking Attention-Based Interpretability of Deep Learning in Multivariate Time Series Predictions," sets the newly emerging trend of specifically designing diagnostic datasets for understanding the inner workings of attention mechanism based deep learning models for multivariate forecasting tasks. The authors designed a novel benchmark of synthetically designed datasets with the transparent underlying generating process of multiple time series interactions with increasing complexity.…”
Section: Contributions to Intelligence Augmentation (citation type: mentioning)
confidence: 99%
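
The benchmark idea the quote describes, series drawn from a generating process whose variable interactions are fully known, means a model's attention map can be checked against ground truth. Below is a minimal Python sketch assuming a linear vector-autoregressive generator; the interaction matrix, noise scale, and update rule are illustrative stand-ins, not the actual generators from Barić et al. [4].

    import numpy as np

    def make_synthetic_mts(n_steps=1000, seed=0):
        """Sketch of a diagnostic dataset: a 3-variable linear autoregressive
        process with a known interaction matrix A, so the ground-truth
        influence of each series on every other is transparent."""
        rng = np.random.default_rng(seed)
        # A[i, j]: influence of series j at time t-1 on series i at time t
        A = np.array([[0.5, 0.3, 0.0],
                      [0.0, 0.5, 0.2],
                      [0.0, 0.0, 0.5]])
        x = np.zeros((n_steps, 3))
        x[0] = rng.normal(size=3)
        for t in range(1, n_steps):
            x[t] = A @ x[t - 1] + 0.1 * rng.normal(size=3)
        return x, A  # A is the ground truth an attention map should recover

    series, truth = make_synthetic_mts()

Increasing complexity can then be emulated by adding nonlinear couplings or longer lags to the same transparent template.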
“…We use the implementation available in PyTorch. Indeed, according to a previous study (Barić et al., 2021), it seems to be the only model with both a satisfying performance score and correct interpretability, capturing both autocorrelations and cross-correlations between multiple time series. Interestingly, when evaluating IMV-LSTM on simulated data from statistical and mechanistic models, the correctness of interpretability increases with more complex datasets.…”
Section: IMV-LSTM (citation type: mentioning)
confidence: 79%
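
The quantities the quoted passage expects a faithful attention map to capture, lagged autocorrelations (diagonal) and cross-correlations (off-diagonal), can be computed directly as a reference. A hedged sketch; the function name, single-lag choice, and comparison protocol are assumptions, not part of the cited implementation.

    import numpy as np

    def lagged_corr(x, lag=1):
        """C[i, j] = corr(series j at time t-lag, series i at time t).
        Diagonal entries are autocorrelations; off-diagonal entries are
        cross-correlations between the series."""
        past, present = x[:-lag], x[lag:]
        n = x.shape[1]
        C = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                C[i, j] = np.corrcoef(present[:, i], past[:, j])[0, 1]
        return C

    # On the synthetic series from the sketch above, lagged_corr(series)
    # should roughly mirror the nonzero pattern of the true matrix A.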
“…A recent study showed that the input attention mechanism of the DA-RNN does not reflect the causal interactions between the variables composing the system [57]. This is also supported by the extremely small values of the correlation coefficient between the true interaction matrix and the input attention matrix of each gene regulatory network, reported in Table 2.…”
Section: Results (citation type: mentioning)
confidence: 99%
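
The check reported here reduces the comparison between ground truth and attention to a single coefficient. A minimal sketch, assuming Pearson correlation on the flattened matrices; study [57] may use a different estimator, and the example matrices below are hypothetical.

    import numpy as np

    def attention_fidelity(true_interactions, attention):
        """Pearson correlation between the flattened ground-truth interaction
        matrix and the (e.g. time-averaged) input attention matrix. Values
        near zero indicate attention that ignores the causal structure."""
        a = np.asarray(true_interactions, dtype=float).ravel()
        b = np.asarray(attention, dtype=float).ravel()
        return float(np.corrcoef(a, b)[0, 1])

    # Hypothetical example: near-uniform attention on a single true link
    truth = np.array([[0.0, 1.0],
                      [0.0, 0.0]])
    attn = np.array([[0.25, 0.25],
                     [0.30, 0.20]])
    print(attention_fidelity(truth, attn))  # ~0.0: attention misses the structure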