2020
DOI: 10.48550/arxiv.2009.09092
Preprint

Evaluation of Local Explanation Methods for Multivariate Time Series Forecasting

Ozan Ozyegen,
Igor Ilic,
Mucahit Cevik

Abstract: Being able to interpret a machine learning model is a crucial task in many applications of machine learning. Specifically, local interpretability is important in determining why a model makes particular predictions. Despite the recent focus on AI interpretability, there has been a lack of research in local interpretability methods for time series forecasting while the few interpretable methods that exist mainly focus on time series classification tasks. In this study, we propose two novel evaluation metrics fo…

Cited by 1 publication (2 citation statements)
References 22 publications
“…Our preliminary analysis indicates that training a single GBR model for each dataset (i.e., similar to the other three models) leads to a significant deterioration in forecasting performance across the datasets, hence we adopt the above-explained approach. We note that above results for these datasets are largely inline with the forecasting performance values reported in previous studies (e.g., see [8,19,23]).…”
Section: Results on Model Performances (supporting, confidence: 91%)
“…Post-hoc interpretability methods have been used to interpret the decisions of time series models. Mujkanovic [17] used SHAP to interpret time series classifiers, whereas Ozyegen et al [19] evaluated three post-hoc interpretability methods, including SHAP, to interpret the time series forecasting models. On the other hand, many time series forecasting methods take into account interpretability considerations in model development [3,8].…”
Section: Related Work (mentioning, confidence: 99%)
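The citation statement above describes post-hoc interpretability of time series forecasters, where a fitted model's predictions are probed after training. One common model-agnostic approach in this family is occlusion-style perturbation: replace one (time step, feature) cell of the input window and measure how much the forecast changes. The sketch below is a minimal NumPy illustration under assumed conventions (the `toy_model`, window shape, and baseline value are hypothetical stand-ins, not the implementation evaluated in the paper):

```python
import numpy as np

def occlusion_importance(model, window, baseline=0.0):
    """Local importance scores for one multivariate input window.

    model    : callable mapping a (timesteps, features) array to a forecast vector
    window   : the (timesteps, features) instance to explain
    baseline : value used to occlude a single cell
    Returns an array of window's shape: total absolute change in the
    forecast when each cell is replaced by `baseline`.
    """
    base_pred = model(window)
    importance = np.zeros(window.shape)
    for t in range(window.shape[0]):
        for f in range(window.shape[1]):
            perturbed = window.copy()
            perturbed[t, f] = baseline  # occlude one (time step, feature) cell
            importance[t, f] = np.abs(model(perturbed) - base_pred).sum()
    return importance

# Toy forecaster: predicts, per feature, the mean of the last two time steps.
toy_model = lambda x: x[-2:].mean(axis=0)
window = np.arange(12, dtype=float).reshape(4, 3)  # 4 time steps, 3 features
scores = occlusion_importance(toy_model, window)
# The toy model ignores the first two time steps, so their rows score zero,
# while cells in the last two rows receive positive importance.
```

SHAP, mentioned in the quoted statement, generalizes this idea by averaging such perturbation effects over coalitions of features to approximate Shapley values, rather than occluding one cell at a time.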