2019
DOI: 10.1109/jas.2019.1911744
Clustering structure analysis in time-series data with density-based clusterability measure

Cited by 27 publications (20 citation statements)
References: 49 publications
“…As the distance metric for K-means we use the Mahalanobis distance, which is invariant to scale under nonsingular linear transformations. An in-depth study of different metrics [36,37] is left as future work, to investigate whether they can improve the performance of the proposed methodology. (6) Label each pattern according to the number of the group into which it (the instance) was classified.…”
Section: Complexity (mentioning)
confidence: 99%
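The statement above pairs K-means with the Mahalanobis metric. A minimal sketch of that combination, assuming NumPy, a single global covariance estimated from the data, and a hypothetical function name `mahalanobis_kmeans` (none of these details come from the cited paper):

```python
import numpy as np

def mahalanobis_kmeans(X, k, n_iter=100, seed=0):
    """Lloyd-style K-means iterations under a global Mahalanobis metric (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    VI = np.linalg.inv(np.cov(X, rowvar=False))         # inverse covariance defines the metric
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        diff = X[:, None, :] - centers[None, :, :]       # shape (n, k, d)
        d2 = np.einsum('nkd,de,nke->nk', diff, VI, diff) # squared Mahalanobis distances
        labels = d2.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

The returned `labels` array corresponds to step (6) of the quoted procedure: each pattern receives the number of the group into which it was classified.
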
“…Time-series data clustering algorithms (TSDCAs) cluster time-series data by minimizing the dissimilarity of samples within the same cluster while maximizing the dissimilarity between different clusters [45][46][47]. Since the FCM, a TSDCA, has been shown to be effective at clustering time-series data [48][49][50], we employ the FCM to subclassify land cover classes.…”
Section: Class and Subclass Definition (mentioning)
confidence: 99%
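The statement above relies on fuzzy c-means (FCM) for time-series clustering. A minimal sketch of the standard FCM updates, assuming equal-length series treated as NumPy vectors under the Euclidean distance; the function name `fuzzy_c_means` and the tolerance value are illustrative choices, not details from the cited work:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal FCM: X is an (n_samples, length) array of equal-length series."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                    # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m                                       # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted cluster prototypes
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = d ** (-2.0 / (m - 1))                    # standard membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, centers
```

Hard subclass labels, as used for the land-cover subclasses, can then be taken as `U.argmax(axis=1)`.
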
“…Equation (7) indicates that the cosine-similarity distance δ_i is obtained by computing the minimum distance from the data point x_i to any point whose local density is greater than that of x_i. After the two parameters have been calculated, a decision graph with ρ on the horizontal axis and δ on the vertical axis can be constructed.…”
Section: Definition 2: Local Density ρ_i Based on Cosine Similarity (Gaussian Kernel) (mentioning)
confidence: 99%
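The statement above describes the two quantities behind a density-peaks decision graph. A sketch of how ρ_i and δ_i might be computed under those definitions, assuming a Gaussian kernel over cosine distances; the function name and the default cutoff `dc` are assumptions, not taken from the cited paper:

```python
import numpy as np

def decision_graph(X, dc=0.3):
    """rho_i and delta_i for a density-peaks-style decision graph (cosine distance, Gaussian kernel)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    D = 1.0 - Xn @ Xn.T                                  # cosine distance matrix
    np.fill_diagonal(D, 0.0)
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0       # Gaussian-kernel local density, excluding self
    delta = np.empty_like(rho)
    for i in range(len(rho)):
        higher = rho > rho[i]
        # delta_i: minimum distance to any higher-density point;
        # the globally densest point instead gets the maximum distance.
        delta[i] = D[i, higher].min() if higher.any() else D[i].max()
    return rho, delta                                    # plot delta (vertical) against rho (horizontal)
```

Points with simultaneously large ρ and large δ stand out in the decision graph and are the natural cluster-center candidates.
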
“…Consequently, conventional clustering methods cannot process such data accurately. Some research [4][5][6][7][8][9] has proposed improvements to text clustering algorithms, and other studies [10,11] have proposed improvements to the K-means algorithm. To apply K-means, the number of clusters must be specified in advance and the initial cluster centers are selected at random.…”
Section: Introduction (mentioning)
confidence: 99%
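The statement above points out that K-means needs the cluster count up front and depends on its randomly chosen initial centers. A small illustration of that sensitivity, assuming scikit-learn is available; the toy data and seed values are arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three well-separated 2-D blobs as toy data.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])

# k must be fixed in advance, and a purely random initialization with a single
# restart can land in different local optima, so the inertia varies with the seed.
for seed in (0, 1, 2):
    km = KMeans(n_clusters=3, init="random", n_init=1, random_state=seed).fit(X)
    print(seed, round(km.inertia_, 2))
```

This initialization sensitivity is the kind of weakness the K-means improvements mentioned in the statement aim to address.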