2015
DOI: 10.1109/tkde.2015.2416723
Time-Series Classification with COTE: The Collective of Transformation-Based Ensembles

Abstract: Recently, two ideas have been explored that lead to more accurate algorithms for time-series classification (TSC). First, it has been shown that the simplest way to gain improvement on TSC problems is to transform into an alternative data space where discriminatory features are more easily detected. Second, it was demonstrated that with a single data representation, improved accuracy can be achieved through simple ensemble schemes. We combine these two principles to test the hypothesis that forming a collectiv…

Cited by 377 publications (239 citation statements)
References 34 publications
“…RISE computes four different transformations for each randomly selected interval: the Autocorrelation Function (ACF), the Partial Autocorrelation Function (PACF), and an Autoregressive model (AR), which extract features in the time domain, and the Power Spectrum (PS), which extracts features in the frequency domain [30,1]. Coefficients of these functions are used to form a new transformed feature vector.…”
Section: Interval-based Techniques
confidence: 99%
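The interval transformations quoted above can be sketched in a few lines of numpy. This is a minimal illustration, not the RISE implementation: the function names, the least-squares AR fit, and the fixed lag/order counts are my assumptions, and PACF is omitted for brevity.

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation at lags 1..nlags (time-domain features)."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(1, nlags + 1)])

def ar_coeffs(x, order):
    """AR(order) coefficients fitted by ordinary least squares."""
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coef

def power_spectrum(x):
    """Periodogram via the real FFT (frequency-domain features)."""
    return np.abs(np.fft.rfft(x)) ** 2

def rise_interval_features(series, start, length, nlags=4, order=4):
    """Concatenate ACF, AR, and PS coefficients for one random interval."""
    seg = series[start:start + length]
    return np.concatenate([acf(seg, nlags), ar_coeffs(seg, order),
                           power_spectrum(seg)])
```

In the scheme the quote describes, a feature vector like this would be built per interval and fed to a standard classifier.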
“…Two leading algorithms that combine multiple transformations are the Flat Collective of Transformation-Based Ensembles (FLAT-COTE) [1] and the more recent variant, the Hierarchical Vote COTE (HIVE-COTE) [30]. FLAT-COTE is a meta-ensemble of 35 different classifiers that use different time-series classification methods, such as similarity-based, shapelet-based, and interval-based techniques.…”
Section: Combinations of Transformations
confidence: 99%
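The meta-ensemble idea in the quote above amounts to a weighted probability vote over heterogeneous component classifiers. A minimal sketch, assuming (as FLAT-COTE's scheme is usually described) that each component's vote is weighted by its cross-validation accuracy on the training data:

```python
import numpy as np

def cote_predict(component_probs, cv_accuracies):
    """Weighted vote over component classifiers.

    component_probs: one (n_classes,) probability vector per classifier.
    cv_accuracies:   training CV accuracy of each classifier, used as its
                     voting weight (assumed weighting scheme).
    """
    votes = sum(w * np.asarray(p)
                for w, p in zip(cv_accuracies, component_probs))
    return int(np.argmax(votes))
```

A stronger component therefore pulls the collective decision toward its prediction, while weak components contribute little.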
“…Finally, ensemble-based methods, such as COTE [3] or HIVE-COTE [16], which rely on several of the standalone classifiers presented above, are now considered state-of-the-art for the TSC task. Note, however, that these methods tend to be computationally expensive, have high memory usage, and are difficult to interpret (as stated in Section 1) due to the combination of many different core classifiers.…”
Section: Time Series Classification
confidence: 99%
“…it is difficult to determine what particular behavior in a time series triggered the classification decision. Note that the same interpretability issue arises with ensemble classifiers such as [3], where one decision depends on the presence of multiple shapelets. [Figure 1 caption: Example test time series and the three most discriminative shapelets used for its classification, for a baseline [11] (top) and for the proposed AI↔PR-CNN model (bottom), on the Herring classification problem.]…”
Section: Introduction
confidence: 97%
“…To contrast with multiple types of classifiers, we have made an effort to consider some classical and effective methods, including (1) distance-based methods: 1-NN with Euclidean distance (1NN-ED) [9] and 1-NN with DTW (1NN-DTW) [10]; (2) feature-based methods: shapelets [11] and random forests based on features (Features-RF) [12]; and (3) ANN-based methods: feedforward neural networks (FNNs) [43], RNNs [17], LSTMs [20], and our BICORN-RNNs. It should be noted that the experimental results for types (1) and (2) were collected from [44,45] for an authoritative comparison. For type (3), we use the default training and testing set splits provided by UCR to ensure fairness.…”
Section: Experiments Description
confidence: 99%
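The 1NN-DTW baseline mentioned in the quote above is simple enough to sketch directly: a dynamic-programming DTW distance and a brute-force nearest-neighbour search. A minimal, unoptimized illustration (real baselines typically add a warping window and lower-bounding, which are omitted here):

```python
import numpy as np

def dtw(a, b):
    """Classic O(n*m) dynamic time warping distance (squared-error cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nn1_dtw(train_X, train_y, query):
    """Label of the training series with the smallest DTW distance."""
    dists = [dtw(x, query) for x in train_X]
    return train_y[int(np.argmin(dists))]
```

Despite its simplicity, this classifier is the standard reference point that ensembles such as COTE are evaluated against on the UCR datasets.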