2012
DOI: 10.7753/ijcatr0201.1001
New Approach for K-mean and K-medoids Algorithm

Abstract: K-means and K-medoids clustering algorithms are widely used for many practical applications. The original k-means and k-medoids algorithms select initial centroids and medoids randomly, which affects the quality of the resulting clusters and sometimes generates unstable and empty clusters that are meaningless. The original k-means and k-medoids algorithms are also computationally expensive, requiring time proportional to the product of the number of data items, the number of clusters, and the number of iterations. The new ap…

Cited by 18 publications (20 citation statements) | References 3 publications
“…Because children were measured based on performance scores on LITMUS-SRT and NWRT designed to identify SLI without penalizing bilinguals, our premise was that SLI-cases would be similar to each other, and hence group together, while TD-cases would form their own cluster regardless of bilingualism. Different from Hamann and Abed Ibrahim (2017), we chose the PAM (Partitioning Around Medoids) non-hierarchical k-medoid clustering method (Kaufman and Rousseeuw, 1987, 2009) over k-means, because it is a suitable method for small datasets with up to approximately 60 objects, and because it can handle noisy data and outliers (Kaufman and Rousseeuw, 1987, 2009; Kashef and Kamel, 2008; Patel and Singh, 2013; Soni and Patel, 2017). Variables were scaled for normalization purposes in the course of the PAM-analysis.…”
Section: Methods
confidence: 99%
“…First, in the so-called “Build-step,” the k-medoid algorithm selects k medoids randomly, with k being the optimal number of clusters. Next, a matrix of dissimilarities is calculated from the raw data and the algorithm assigns every object to one of the k clusters based on its distance to the nearest medoid (Patel and Singh, 2013). The sum of absolute error in the clustering procedure is equal to the sum of the distances between data points and their medoids.…”
Section: Methods
confidence: 99%
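The Build-step quoted above can be sketched in a few lines of Python. This is a minimal illustration, not the implementation of Patel and Singh (2013); the function name, the `dist` callback, and the `seed` parameter are assumptions made for the sketch:

```python
import random

def pam_build_and_assign(data, k, dist, seed=0):
    """Sketch of the Build-step described above: pick k medoids at
    random, assign every object to the cluster of its nearest medoid,
    and accumulate the sum of absolute error."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(data)), k)      # random initial medoids
    clusters = {m: [] for m in medoids}
    total_error = 0.0
    for i, point in enumerate(data):
        # nearest medoid under the supplied dissimilarity function
        nearest = min(medoids, key=lambda m: dist(point, data[m]))
        clusters[nearest].append(i)
        total_error += dist(point, data[nearest])  # sum of absolute error
    return medoids, clusters, total_error
```

With a one-dimensional dataset, absolute difference serves as the dissimilarity, e.g. `pam_build_and_assign([1.0, 1.1, 5.0, 5.2], 2, lambda a, b: abs(a - b))`.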
“…To solve this problem, the K-means algorithm repeats two procedures by fixing either w_ij or b_j [8]. Algorithm 1 consists of the following steps [61]:

REPEAT
    Make k clusters by assigning each data point to the closest centroid
    Recalculate the centroid of each cluster as the mean of the data in the cluster
UNTIL the centroids do not change

K-means clustering has been extensively applied to group reservoir models in petroleum engineering [13,16,45,48]. Since reservoir models usually have high dimensionality, some researchers have attempted to perform the algorithm on the featured plane using feature extraction methods, such as PCA and singular value decomposition [5,24,25].…”
Section: K-means Clustering
confidence: 99%
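The assign/recompute loop quoted above can be sketched as a one-dimensional example. The `kmeans` name and the deliberately simple non-random initialization are assumptions for this sketch, not code from [61]:

```python
def kmeans(points, k, iters=100):
    """1-D sketch of the loop quoted above: assign each point to the
    closest centroid, recompute each centroid as its cluster's mean,
    and stop when the centroids no longer change."""
    centroids = list(points[:k])  # simple (non-random) initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Make k clusters by assigning data points to the closest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[nearest].append(p)
        # Recalculate each centroid as the mean of the data in its cluster
        new_centroids = [sum(c) / len(c) if c else centroids[j]
                         for j, c in enumerate(clusters)]
        if new_centroids == centroids:  # UNTIL the centroids do not change
            break
        centroids = new_centroids
    return centroids, clusters
```

For example, `kmeans([1.0, 1.2, 8.0, 8.4], 2)` converges to centroids near 1.1 and 8.2, grouping the two low points and the two high points.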
“…Although the K-means algorithm is more popular in petroleum engineering research due to its calculation efficiency, the K-medoids algorithm has actively been employed for grouping models [43,64]. The detailed steps of Algorithm 2 are listed as follows [61]:…”
Section: K-medoids Clustering
confidence: 99%
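The excerpt truncates before listing Algorithm 2's steps. As a hedged sketch, under the assumption that it follows the classic PAM swap phase (greedily swap a medoid with a non-medoid whenever the swap lowers total dissimilarity), a minimal version could look like this; the exact steps of [61] may differ:

```python
def pam(data, k, dist):
    """Hedged sketch of classic k-medoids (PAM): start from the first k
    points as medoids, then greedily swap a medoid with a non-medoid
    whenever the swap lowers the total dissimilarity."""
    medoids = list(range(k))

    def cost(meds):
        # total dissimilarity of every object to its nearest medoid
        return sum(min(dist(p, data[m]) for m in meds) for p in data)

    improved = True
    while improved:
        improved = False
        best = cost(medoids)
        for mi in range(k):
            for o in range(len(data)):
                if o in medoids:
                    continue
                candidate = medoids[:mi] + [o] + medoids[mi + 1:]
                c = cost(candidate)
                if c < best:            # keep the swap only if it helps
                    medoids, best, improved = candidate, c, True
    return sorted(medoids), cost(medoids)
```

Because medoids are always actual data points, the result is directly interpretable, which is one reason PAM is favored for small, noisy datasets.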
“…The k-means method is based on centroid techniques to represent a cluster and it is sensitive to outliers. This means that a data object with an extremely large value may disrupt the distribution of the data [6].…”
Section: K-medoid
confidence: 99%
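The outlier sensitivity described above shows up even in one dimension: a single extreme value drags the mean (the k-means representative) away from the bulk of the cluster, while the medoid stays on a typical point. The numbers below are an illustrative example, not data from the paper:

```python
# A tight cluster of four points plus one extreme outlier.
cluster = [1.0, 1.1, 0.9, 1.2, 100.0]   # 100.0 is the outlier

mean = sum(cluster) / len(cluster)       # centroid used by k-means
# medoid: the member minimizing total dissimilarity to all other members
medoid = min(cluster, key=lambda c: sum(abs(c - x) for x in cluster))

print(mean)    # ≈ 20.84, pulled far from the four typical points
print(medoid)  # 1.1, an actual data point near the cluster core
```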