2004
DOI: 10.1007/978-3-540-24741-8_8

Iterative Incremental Clustering of Time Series

Abstract: We present a novel anytime version of partitional clustering algorithms, such as k-Means and EM, for time series. The algorithm works by leveraging the multi-resolution property of wavelets. The dilemma of choosing the initial centers is mitigated by initializing the centers at each approximation level using the final centers returned by the coarser representations. In addition to casting the clustering algorithms as anytime algorithms, this approach has two other very desirable properties. By wo…
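The coarse-to-fine scheme the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names (`haar_approx`, `anytime_kmeans`), the choice of plain Lloyd's k-means, and the pairwise-averaging Haar approximation are all assumptions made for the sketch. The key idea from the abstract is preserved: k-means runs first on a coarse wavelet approximation, and the centers it converges to seed the next, finer resolution level.

```python
import numpy as np

def haar_approx(data, level):
    """Coarse Haar approximation: average adjacent pairs `level` times.

    `data` is an (n_series, length) array; length must be divisible by 2**level.
    """
    approx = data
    for _ in range(level):
        approx = (approx[:, 0::2] + approx[:, 1::2]) / 2.0
    return approx

def kmeans(data, centers, n_iter=20):
    """Plain Lloyd's k-means with Euclidean distance."""
    labels = np.zeros(len(data), dtype=int)
    for _ in range(n_iter):
        # assign each series to its nearest center
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers (keep the old center if a cluster empties)
        for j in range(len(centers)):
            members = data[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

def anytime_kmeans(data, k, max_level, rng):
    """Run k-means coarse-to-fine; each level is seeded by the previous one."""
    # random initial centers are needed only at the coarsest level
    coarse = haar_approx(data, max_level)
    centers = coarse[rng.choice(len(data), k, replace=False)].copy()
    labels = None
    for level in range(max_level, -1, -1):
        approx = haar_approx(data, level)
        centers, labels = kmeans(approx, centers)
        if level > 0:
            # upsample centers to the next (finer) resolution
            centers = np.repeat(centers, 2, axis=1)
    return labels
```

Because the result at each resolution level is a valid clustering, the procedure can be interrupted after any level and still return an answer, which is what makes it an anytime algorithm.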

Cited by 135 publications
(85 citation statements)
References 17 publications
“…This disqualified them because we set the proviso that the results should be independent of the initial conditions; our method of axial K-means (Lelu, 1994) is part of this family. The local optima for a given number of clusters mostly reveal the same main clusters, which are often trivial, but can also make the most interesting ones of average or low size appear, disappear, amalgamate or split. Quite a lot of incremental variants of these methods have been proposed (Binztock and Gallinari, 2002; Chen et al., 2003), and a partial review of these can be found in Lin et al. (2004). Many come from the DARPA-TDT research programme, such as Gaudin and Nicoloyannis (2005) and Gaber et al. (2005).…”
Section: Adapting Methods With Mobile Centres To Incrementality
confidence: 99%
“…The most commonly used data mining clustering algorithm is k-means [2,22,21]. We performed k-means using the Euclidean distance on the raw data, and on our bag-of-patterns representation.…”
Section: Partitional Clustering
confidence: 99%
“…DTW has been extended to deal with unknown start and end points of isolated words in speech [4], [5], and connected word recognition [6], [7], [8]. More recent research on DTW has focused on applying it to mining patterns from one-dimensional time series [9], and indexing and clustering one-dimensional time series [10], [11].…”
Section: Dynamic Time Warping
confidence: 99%
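The last citation statement refers to dynamic time warping, the elastic distance measure used in much of the time-series indexing and clustering work the statement cites. As a point of reference, a minimal sketch of the classic DTW recurrence (full warping window, quadratic cost; not taken from any of the cited papers) looks like this:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic O(len(x) * len(y)) dynamic time warping distance."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            # extend the cheapest of the three allowed alignments
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return np.sqrt(cost[n, m])
```

Unlike the Euclidean distance, DTW can align a peak in one series with a slightly shifted peak in another at zero cost, which is why it tolerates the unknown start and end points mentioned in the snippet.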