2019
DOI: 10.1007/s10618-019-00633-3
Interpretable time series classification using linear models and multi-resolution multi-domain symbolic representations

Abstract: The time series classification literature has expanded rapidly over the last decade, with many new classification approaches published each year. Prior research has mostly focused on improving the accuracy and efficiency of classifiers, with interpretability being somewhat neglected. This aspect of classifiers has become critical for many application domains and the introduction of the EU GDPR legislation in 2018 is likely to further emphasize the importance of interpretable learning algorithms. Currently, sta…

Cited by 102 publications (78 citation statements)
References 41 publications
“…The current approaches to time series classification that exploit one or more of these representations can be grouped into four categories: modular heterogeneous ensembles, where each module consists of a classifier built on a particular transformation type, such as HIVE-COTE; tree-based homogeneous ensembles, where different data representations are embedded within the nodes of the tree (Shifaz et al 2020); deep learning algorithms, where the representations are embedded in the network (Fawaz et al 2019); and transformation/convolution approaches that create massive new feature spaces that are parsed with a linear classifier (Dempster et al 2020; Nguyen et al 2019). The most effective algorithms exploit one or more representations.…”
Section: Introduction
confidence: 99%
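The last category above, convolution-based feature spaces parsed with a linear classifier, can be illustrated with a minimal sketch in the spirit of Dempster et al (2020): random kernels transform each series into pooled summary features that a linear model could then separate. The kernel count, pooling statistics, and toy data below are illustrative assumptions, not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kernels(n_kernels, kernel_len):
    """Draw random convolutional kernels (weights and biases)."""
    weights = rng.normal(size=(n_kernels, kernel_len))
    biases = rng.uniform(-1.0, 1.0, size=n_kernels)
    return weights, biases

def transform(series, weights, biases):
    """Map one series to a feature vector: for each kernel, keep the
    max activation and the proportion of positive values (PPV)."""
    feats = []
    for w, b in zip(weights, biases):
        conv = np.convolve(series, w, mode="valid") + b
        feats.extend([conv.max(), (conv > 0).mean()])
    return np.array(feats)

# Toy data: five noisy sine-wave series and five pure-noise series
X = [np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.1 * rng.normal(size=100)
     for _ in range(5)]
X += [rng.normal(size=100) for _ in range(5)]

W, B = random_kernels(50, 9)
features = np.stack([transform(x, W, B) for x in X])
print(features.shape)  # (10, 100): 10 series, 2 features per kernel
```

A ridge or logistic regression fit on `features` completes the pipeline; the massive, cheap-to-compute feature space does the representational work, leaving classification to a simple linear model.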
“…These explanations would also be the ones produced by post-hoc methods such as LIME [13]. For lack of space, we do not show examples of such explanations, but the interested reader can find many examples in [25] or in [20].…”
Section: B. Qualitative Results for Explainability
confidence: 99%
“…Finally, [20] also proposes a time series classification method. The authors extract various symbolic representations from the time series and train a logistic regression model on top of these representations.…”
Section: B. Model Interpretability
confidence: 99%
“…They showed the relevance of an SD approach to create interpretable rules, used subsequently for classification, in the context of energy consumption. Nguyen et al [22] propose a method that learns a linear classifier on discretized data (SAX or SFA). The feature (pattern) with the largest weight is considered the most discriminating.…”
Section: More Related Work
confidence: 99%
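The SAX discretization these excerpts describe can be sketched as follows: each sliding window is z-normalised, averaged into a few segments (PAA), and each segment mean is mapped to a letter via Gaussian breakpoints, yielding a bag of symbolic words on which a linear classifier can be trained. The window size, word length, alphabet, and breakpoints below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])  # N(0,1) quartile breakpoints
ALPHABET = "abcd"

def sax_word(window, word_len=4):
    """z-normalise a window, average into word_len segments (PAA),
    then map each segment mean to a letter via the breakpoints."""
    w = (window - window.mean()) / (window.std() + 1e-8)
    paa = w.reshape(word_len, -1).mean(axis=1)
    return "".join(ALPHABET[i] for i in np.searchsorted(BREAKPOINTS, paa))

def bag_of_words(series, window=16, step=8):
    """Count SAX words over sliding windows: a sparse symbolic feature map."""
    counts = {}
    for start in range(0, len(series) - window + 1, step):
        word = sax_word(series[start:start + window])
        counts[word] = counts.get(word, 0) + 1
    return counts

series = np.sin(np.linspace(0, 4 * np.pi, 64))
counts = bag_of_words(series)
print(counts)
```

Feeding such word counts (across many series, with a shared vocabulary) into a logistic regression gives the interpretable setup the excerpts point to: each learned weight attaches to a concrete symbolic pattern, so the largest-weight patterns can be read off as the most discriminative.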