2020
DOI: 10.1609/aaai.v34i04.5769
Time2Graph: Revisiting Time Series Modeling with Dynamic Shapelets

Abstract: Time series modeling has attracted extensive research efforts; however, achieving both reliable efficiency and interpretability from a unified model still remains a challenging problem. Among the literature, shapelets offer interpretable and explanatory insights in the classification tasks, while most existing works ignore the differing representative power at different time slices, as well as (more importantly) the evolution pattern of shapelets. In this paper, we propose to extract time-aware shapelets by de…

Cited by 48 publications (23 citation statements)
References 34 publications
“…To satisfy this need, we propose a two-phase approach to create the MAFNs from EEG readings. Furthermore, according to the different calculation indexes of each stage, three kinds of MAFNs are constructed, based on dynamic time warping [34] (MAFN-dtw), symbolic mutual information [35,36] (MAFN-smi), and the hub depressed index [37] (MAFN-HDI), respectively.…”
Section: Construction of MAFNs
confidence: 99%
“…First, to reveal temporal regularities, an idea from network science (see [30,31]) is adopted for building a network from a single time series. As illustrated in Figure 1, we use a sliding window to split all EEG time series (Figure 1B) into m subsequences (Figure 1C), and then identify from these m subsequences the representative subsequences (RSs) through an idea similar to clustering: 1) calculate the similarity (measured by dynamic time warping (DTW) [34] or symbolic mutual information (SMI) [35,36]) between each pair of subsequences, and for each subsequence select its k most similar (other) subsequences; 2) among the mk selected subsequences, some are repeated, so we pick the top k that occur most frequently.…”
Section: Construction of MAFNs
confidence: 99%
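The selection procedure quoted above (sliding window, pairwise DTW similarity, then keeping the subsequences that recur most often among nearest neighbours) can be sketched as follows. This is a minimal illustration, not the cited authors' code: the function names, the plain absolute-difference DTW cost, and the toy parameters are all assumptions.

```python
import numpy as np
from collections import Counter

def dtw_distance(a, b):
    # Classic dynamic-time-warping distance between two 1-D sequences,
    # filled in via the standard O(n*m) dynamic program.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # assumed local cost; papers vary
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def representative_subsequences(series, window, k):
    """Split `series` into sliding-window subsequences, let each subsequence
    vote for its k nearest (DTW) neighbours, and return the k subsequences
    that collect votes most frequently -- mirroring steps 1) and 2) above."""
    subs = [series[i:i + window] for i in range(len(series) - window + 1)]
    votes = Counter()
    for i, s in enumerate(subs):
        dists = sorted((dtw_distance(s, t), j)
                       for j, t in enumerate(subs) if j != i)
        for _, j in dists[:k]:   # k most similar other subsequences
            votes[j] += 1
    # among the m*k selected subsequences, keep the top-k most frequent
    return [subs[j] for j, _ in votes.most_common(k)]
```

A SMI-based variant would only swap `dtw_distance` for a (negated) mutual-information score over a symbolised version of the subsequences; the voting and top-k steps stay the same.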
“…These methods ignore the relationships among the time series, thus making it difficult to understand the causes of outliers. In a previous study [9], time-aware shapelets were extracted to construct an evolution graph for detecting time series outliers. The approach was applied to a single signal, and outliers were detected only by comparing the signal with itself.…”
Section: Introduction
confidence: 99%
“…Recently, self-supervised learning has attracted more and more attention in computer vision by designing different pretext tasks on image data, such as solving jigsaw puzzles (Noroozi & Favaro, 2016), inpainting (Pathak et al, 2016), rotation prediction (Gidaris et al, 2018), and contrastive learning of visual representations (Chen et al, 2020), and on video data, such as object tracking (Wang & Gupta, 2015) and pace prediction (Wang et al, 2020). Although some video-based approaches attempt to capture temporal information in the designed pretext task, time series are far different structural data compared with video.…”
Section: Introduction
confidence: 99%
“…Ensemble-based methods aim at combining multiple classifiers for higher classification performance. More recently, deep-learning-based methods (Karim et al, 2017; Ma et al, 2019; Cheng et al, 2020) conduct classification by cascading a feature extractor and a classifier based on MLPs, RNNs, and CNNs in an end-to-end manner. Our approach instead focuses on self-supervised representation learning of time series on unlabeled data, exploiting the inter-sample and intra-temporal relations of time series to guide the generation of useful features.…”
Section: Introduction
confidence: 99%