Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2014
DOI: 10.1145/2623330.2623613
Learning time-series shapelets

Abstract: Shapelets are discriminative sub-sequences of time series that best predict the target variable. For this reason, shapelet discovery has recently attracted considerable interest within the time-series research community. Currently shapelets are found by evaluating the prediction qualities of numerous candidates extracted from the series segments. In contrast to the state-of-the-art, this paper proposes a novel perspective in terms of learning shapelets. A new mathematical formalization of the task via a classi…
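The abstract's "learning" perspective replaces exhaustive candidate evaluation with direct optimization of the shapelets themselves. A key device in such formulations is making the distance between a shapelet and a series differentiable, typically by replacing the hard minimum over sliding-window distances with a soft minimum. The sketch below is illustrative, not the paper's exact objective; the function names and the `alpha` parameter are assumptions for this example.

```python
import numpy as np

def window_distances(series, shapelet):
    """Mean squared distance between a shapelet and every sliding window."""
    L = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, L)
    return np.mean((windows - shapelet) ** 2, axis=1)

def soft_min_distance(series, shapelet, alpha=-20.0):
    """Differentiable soft minimum over the window distances.

    As alpha -> -inf this approaches the hard minimum used by
    search-based shapelet methods, while remaining smooth enough
    for gradient-based learning of the shapelet values."""
    d = window_distances(series, shapelet)
    w = np.exp(alpha * (d - d.min()))  # shift for numerical stability
    return float(np.sum(d * w) / np.sum(w))
```

Because `soft_min_distance` is smooth in the shapelet's values, a shapelet can be updated by gradient descent on a classification loss instead of being selected from enumerated segment candidates.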

Cited by 368 publications (320 citation statements)
References 18 publications
“…The shapelets are typically first extracted from the data, then used as a new representation in which to learn standard classifiers such as decision trees, random forests and SVMs [16]. Approaches for finding the best shapelets vary from brute-force search, to bounding quality metrics such as the information gain [15], searching in a lower-dimensional SAX space [17], or learning discriminative fixed-length shapelets [7]. These methods deliver high accuracy but have a high computational complexity for training or testing, and many are sensitive to noise [4].…”
Section: Related Work
confidence: 99%
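The pipeline described in this citation, extracting shapelets and then using the distances to them as a new feature representation, can be sketched as follows. This is a minimal illustration of the shapelet transform, not any cited paper's implementation; the function names are assumptions for this example.

```python
import numpy as np

def min_distance(series, shapelet):
    """Minimum mean squared distance of a shapelet over all sliding windows."""
    windows = np.lib.stride_tricks.sliding_window_view(series, len(shapelet))
    return float(np.min(np.mean((windows - shapelet) ** 2, axis=1)))

def shapelet_transform(dataset, shapelets):
    """Map each series to its vector of minimum distances to the shapelets.

    The resulting feature matrix can then be fed to any standard
    classifier (decision tree, random forest, SVM)."""
    return np.array([[min_distance(s, sh) for sh in shapelets]
                     for s in dataset])
```

A series containing a pattern close to some shapelet gets a small value in that feature column, so the transformed features are directly interpretable by domain experts.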
See 2 more Smart Citations
“…The shapelets are typically first extracted from the data, then used as a new representation in which to learn standard classifiers such as decision trees, random forest and SVM [16]. Approaches for finding the best shapelets vary from brute force search, to bounding quality metrics such as the information gain [15], searching in a lower dimensional SAX space [17], or learning discriminative fixedlength shapelets [7]. These methods deliver high accuracy but have a high computational complexity for training or testing and many are sensitive to noise [4].…”
Section: Related Workmentioning
confidence: 99%
“…This approach was originally designed for classification of sequences of discrete items, such as text or DNA. An important aspect of this approach is that it can efficiently select the best variable-length subsequences as driven by the training data and loss function (by combining learning and feature selection), and does not require a user to provide candidate subsequence lengths, such as in [7]. SEQL was shown to perform well in dense feature spaces and with very long sequences and large vocabularies.…”
Section: Classification with Sequence Learner
confidence: 99%
“…Despite the existence of many other approaches for time series classification [38], we use shapelets in our work, as (i) they find local and discriminative features from the data, (ii) they impose no assumptions on the nature of the data, unlike autoregressive or ARIMA time series models [38,39], and they work even on non-stationary time series, (iii) they work on data instances of different lengths (unlike popular classifiers such as support vector machines, feed-forward neural networks, and random forests in their standard forms), (iv) they are easy to interpret and visualize for domain experts, and (v) they have been shown to be more accurate than other methods for some datasets [11,12,15,18,20-24,27,39].…”
Section: Figure 2. A Shapelet Found from Our Dataset
confidence: 99%
“…We consider a binary (two-class) classification scenario. Time series shapelets were first proposed by Ye and Keogh [39], and there have been optimizations of the initial method to make it faster or more advanced [12,15,18,20,24,27].…”
Section: Background on Shapelets
confidence: 99%