2014
DOI: 10.1016/j.patcog.2014.01.015

The MinMax k-Means clustering algorithm

Cited by 228 publications (103 citation statements)
References 21 publications
“…Tzortzis et al. [5] introduced a method that integrates the MinMax and k-means algorithms, assigning weights to the clusters according to their variance and optimizing a weighted version of the k-means objective. The weights are trained iteratively along with the cluster assignments.…”
Section: B. Centroid-Based Clustering (mentioning)
confidence: 99%
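To make the quoted description concrete, here is a minimal sketch of a variance-weighted k-means loop in the spirit of the method described above. The weight-update rule (weights proportional to cluster variance, applied with an assumed exponent p), the function name weighted_kmeans, and the parameters p and n_iter are illustrative assumptions, not the exact updates derived in the MinMax k-Means paper.

```python
# Illustrative variance-weighted k-means loop (a sketch, not the exact MinMax
# k-Means updates): clusters are weighted by their variance, and the weights
# penalise high-variance clusters in the next assignment step.
import numpy as np

def weighted_kmeans(X, k, p=1.5, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # Assign each point to the cluster minimising the weight-scaled distance.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin((weights ** p) * d2, axis=1)
        # Recompute centers and per-cluster variances (sums of squared errors).
        variances = np.zeros(k)
        for j in range(k):
            members = X[labels == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
                variances[j] = ((members - centers[j]) ** 2).sum()
        # Illustrative re-weighting: weights proportional to cluster variance,
        # so large-variance clusters are penalised in the next assignment,
        # pushing the solution towards clusters of more balanced variance.
        if variances.sum() > 0:
            weights = np.maximum(variances / variances.sum(), 1e-6)
            weights /= weights.sum()
    return labels, centers, weights
```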
“…Although it can learn its own Mahalanobis distance for different data, this comes at a high computational cost. Moreover, since covariance describes only the linear relationship between two variables, it is not universally suitable for all kinds of data [2,5].…”
Section: Introduction (mentioning)
confidence: 99%
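As a small, assumed illustration of the covariance point above (not code from the cited works), the snippet below computes a Mahalanobis distance from an estimated covariance matrix; estimating and inverting that matrix is one source of the computational cost mentioned, and the resulting metric reflects only linear (second-order) structure.

```python
# Hypothetical illustration of the Mahalanobis distance: distances are scaled
# by the inverse covariance matrix, so the metric captures only linear structure.
import numpy as np

def mahalanobis(x, y, cov):
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[2.0, 0.0], [1.0, 0.5]])  # correlated data
cov = np.cov(X, rowvar=False)  # estimating (and inverting) this adds cost
print(mahalanobis(X[0], X.mean(axis=0), cov))
```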
“…Another important shortcoming that restricts the application of k-means is that the initial cluster centroids and the number of clusters K must be decided before the analysis [1][2][3][4][5][6][7][8]. The results depend heavily on the initial positions of the cluster centers, so if a bad initialization is chosen the objective function easily becomes trapped in poor local minima [7,8].…”
Section: Introduction (mentioning)
confidence: 99%
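The sensitivity to initialization can be seen in a few lines. The hypothetical sketch below (names and data are assumptions for illustration) runs plain Lloyd's k-means from several random seeds on the same synthetic data and reports the final sum of squared errors, which typically differs across seeds because different starts converge to different local minima.

```python
# Hypothetical demonstration of initialization sensitivity: plain Lloyd's k-means
# started from different random seeds can converge to different local minima,
# which shows up as different final sum-of-squared-errors (SSE) values.
import numpy as np

def lloyd_sse(X, k, seed, n_iter=100):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()  # final SSE for this initialization

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in [(0, 0), (4, 0), (0, 4), (4, 4)]])
print([round(lloyd_sse(X, k=4, seed=s), 1) for s in range(5)])  # often differs by seed
```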
“…It uses the distances between data points as weights to build a Huffman tree, and then chooses the cluster centers according to the value of k in the reverse order of the Huffman tree. In the literature [3][4][5], in order to make the selected initial cluster centers far away from each other, the maximum distance method and the maximum and minimum distance method are utilized, respectively. In the literature [11], this method effectively overcomes the algorithm's sensitivity to the initial cluster centers and ensures its efficiency even when a randomly chosen initial center is poor. In the literature [6][7][12], combined with the minimum spanning tree method, a minimum spanning tree is constructed based on the distances between the data points.…”
Section: Introduction (mentioning)
confidence: 99%
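The "maximum and minimum distance" seeding mentioned in the quote admits a compact sketch: after a first random seed, each subsequent seed is the point whose minimum distance to the seeds chosen so far is largest, so the initial centers end up spread far apart. The function name maxmin_init and its parameters are hypothetical.

```python
# Sketch of the 'maximum and minimum distance' (farthest-first) seeding idea:
# each new seed is the point farthest (in min-distance terms) from the seeds
# already chosen, so the initial centers are spread out.
import numpy as np

def maxmin_init(X, k, seed=0):
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]  # first seed: a random data point
    for _ in range(k - 1):
        C = np.asarray(centers)
        # Minimum squared distance from every point to the seeds chosen so far.
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        centers.append(X[np.argmax(d2)])  # next seed: the farthest point
    return np.asarray(centers)
```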