2016
DOI: 10.1007/978-3-319-49055-7_38

Unsupervised Interpretable Pattern Discovery in Time Series Using Autoencoders

Cited by 13 publications (22 citation statements)
References 22 publications
Citation types: 0 supporting, 22 mentioning, 0 contrasting

“…In [16], for example, Benamara et al. applied convolutional NNs to extract human emotions from facial images. Autoencoders like those in [17], [18], [19] are able to learn important features of the input data by first encoding the input to a lower-dimensional space and then decoding it to reconstruct the input. While the unsupervised extraction of patterns from image or video data is a common task for Convolutional Autoencoders (CAEs), pattern discovery in time series is an underrepresented problem.…”
Section: Related Work (mentioning)
confidence: 99%
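The excerpt above summarizes the autoencoder principle: encode the input to a lower-dimensional space, then decode it to reconstruct the input. The following is a minimal sketch in PyTorch; the layer sizes, window length, and training settings are illustrative assumptions, not the architecture from [17].

```python
# Minimal sketch of a 1-D convolutional autoencoder for time-series windows.
# Assumes PyTorch; all sizes are illustrative, not the architecture from [17].
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress each window into a lower-dimensional latent code.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2),   # 128 -> 64
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),  # 64 -> 32
            nn.ReLU(),
        )
        # Decoder: reconstruct the input window from the latent code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1),        # 32 -> 64
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=5, stride=2,
                               padding=2, output_padding=1),        # 64 -> 128
        )

    def forward(self, x):
        z = self.encoder(x)       # latent representation
        return self.decoder(z)    # reconstruction of the input window

# Training loop: minimize reconstruction error on a batch of windows.
model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
windows = torch.randn(256, 1, 128)  # placeholder batch of length-128 windows
for epoch in range(10):
    recon = model(windows)
    loss = loss_fn(recon, windows)
    opt.zero_grad()
    loss.backward()
    opt.step()
```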
“…While the unsupervised extraction of patterns from image or video data is a common task for Convolutional Autoencoders (CAEs), pattern discovery in time series is an underrepresented problem. In the following sections we take up some ideas regarding the architecture of CAEs from [17] and complement them with several additional methods to detect recurring patterns in time series data.…”
Section: Related Work (mentioning)
confidence: 99%
“…Autoencoders are often combined with sequential data. They learn interpretable representations based on the multi-scale property of sequential data [85], [86].…”
Section: Autoencoder (mentioning)
confidence: 99%
“…This makes the deep learning model itself not a complete black box, and the interpretable approaches based on the DL model are unique.

Method(s) | Scope | Form | √ | Explains
[16] | local | linear model | | feature
CPAR [17] | global | decision tree/rule | | feature
Trepan [22], [23], [24], REFNE [26] | global | decision tree/rule | √ | feature
[19], [20], [21] | global | decision tree/rule | | feature
Anchors [30], PALM [31], [32] | local | decision tree/rule | √ | feature
MMD-critic [37] | global | data point | √ | data point
influence function [38] | global | data point | | data point
SHAP [40], [42] | local | Shapley value | √ | feature
[48], [49] | global | KG | | feature
[50], [51], [52] | global | KG | | semantic relations
RKGE [53], KPRN [54] | global | KG | | decision path
[56] | local | KG | | semantic relations
[57] | local | KG | | semantic relations
[64] | local | NN | | feature
CAM [71], Grad-CAM [70], DeepLIFT [72], LRP [73], IBD [74] | local | NN | | feature
SVCCA [75] | global | NN | | neuronal relations
ACD [76] | global/local | NN | | feature
[78], [79], [80] | global | NN | | neuronal semantics
[81], [82], [83] | local | NN | | feature
[85], [86], [87] | local | NN | | data/feature …”
Section: A Comparison Analysis (mentioning)
confidence: 99%
“…The method presented by Bascol et al. [11] studied the use of Convolutional Autoencoders for the unsupervised mining of recurrent temporal patterns mixed in multivariate time series.…”
Section: Automatic Periodic Motif Detection (mentioning)
confidence: 99%
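The excerpt does not spell out Bascol et al.'s pipeline. As one hedged illustration only, a trained CAE (the ConvAutoencoder sketched earlier) can be applied to sliding windows and the latent codes clustered, so that windows carrying the same recurring pattern group together; the use of k-means, the stride, and the cluster count are assumptions for this sketch, not the method of [11].

```python
# Illustrative follow-up (not Bascol et al.'s exact pipeline): encode sliding
# windows with the trained CAE from the earlier sketch and cluster the latent
# codes; windows sharing a cluster are candidate occurrences of one motif.
import torch
from sklearn.cluster import KMeans

def encode_windows(model, series, window_len=128, stride=16):
    """Slide over a 1-D series and return one flattened latent code per window."""
    windows = series.unfold(0, window_len, stride)   # (n_windows, window_len)
    with torch.no_grad():
        z = model.encoder(windows.unsqueeze(1))      # (n_windows, 32, 32)
    return z.flatten(1).numpy()                      # (n_windows, 1024)

series = torch.randn(4096)                           # placeholder series
codes = encode_windows(model, series)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(codes)
# Windows with the same label are grouped as repetitions of the same pattern.
```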