2017
DOI: 10.1515/jisys-2015-0099
Clustering Using a Combination of Particle Swarm Optimization and K-means

Abstract: Clustering is an unsupervised grouping of data points based on the similarity between them. This paper applies a combination of particle swarm optimization and K-means for data clustering. The proposed approach tries to improve the performance of traditional partitional clustering techniques such as K-means by avoiding the initial requirement of specifying the number of clusters or centroids. The proposed approach is evaluated using various primary and real-world datasets. Moreover, this pape…
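The abstract describes a hybrid in which PSO searches for good cluster centroids and K-means refines them. The paper's exact formulation is not reproduced on this page, so the following is a minimal sketch of one common PSO+K-means scheme, assuming each particle encodes a full set of k candidate centroids and the fitness is the quantization error; the function name `pso_kmeans` and all parameter defaults are illustrative choices, not the authors'.

```python
import numpy as np

def quantization_error(centroids, X):
    # Mean distance from each data point to its nearest centroid (lower is better).
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()

def pso_kmeans(X, k, n_particles=10, iters=30, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    # Each particle is a (k, dim) set of centroids, initialized from random data points.
    pos = X[rng.integers(0, n, size=(n_particles, k))].astype(float)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([quantization_error(p, X) for p in pos])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    for _ in range(iters):
        # Standard PSO velocity update: inertia + cognitive + social terms.
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([quantization_error(p, X) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest_f.argmin()
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    # Refine the swarm's best centroids with a few plain K-means iterations.
    centroids = gbest
    for _ in range(10):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels
```

The PSO stage explores the centroid space globally, which reduces K-means' sensitivity to initialization; the K-means stage then converges quickly to a local optimum near the swarm's best solution.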

Cited by 20 publications (12 citation statements). References 7 publications.
“…In addition, this paper also presents a comparison of the results produced by the proposed approach with those of K-means, based on cluster validity measures such as inter- and intra-cluster distance, quantization error, silhouette index, and Dunn index. The comparison shows that, as the size of the dataset increases, the proposed approach yields a significant improvement over the K-means partitional clustering technique [31].…”
Section: Related Work
confidence: 99%
“…SVM has high recognition accuracy for single-state equipment [21] but is not well suited to handling multi-state or continuously changing loads. The k-nearest neighbor method is mostly used to solve clustering problems; it is very sensitive to the choice of parameters, and each classification requires computing the distance between the unknown sample and all training samples, which demands a large amount of computation [22]. By comparison, the generalized regression neural network (GRNN) has a significant nonlinear mapping ability and strong approximation ability.…”
Section: Introduction
confidence: 99%
“…Larger values indicate a greater separation between clusters, meaning less overlap between the clusters in the model. The formula for inter-cluster distance is as follows [34]:…”
confidence: 99%
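The equation referenced by this citation statement did not survive extraction. A standard definition of inter-cluster distance used in PSO-based clustering — given here as an assumption, since the original formula [34] is not shown on this page — is the average pairwise distance between the K cluster centroids $\mathbf{m}_i$:

$$
d_{\text{inter}} = \frac{2}{K(K-1)} \sum_{i=1}^{K-1} \sum_{j=i+1}^{K} \lVert \mathbf{m}_i - \mathbf{m}_j \rVert
$$

Consistent with the statement above, larger values of $d_{\text{inter}}$ indicate greater separation between clusters.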
“…Smaller values indicate more compact clusters and are therefore desired. The formula for intra-cluster distance is as follows [34]:…”
confidence: 99%
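As with the previous statement, the cited equation was lost in extraction. A standard definition of intra-cluster distance — again an assumption, since the original formula [34] is not shown here — is the average distance of each data point to the centroid of its assigned cluster $C_k$:

$$
d_{\text{intra}} = \frac{1}{N} \sum_{k=1}^{K} \sum_{\mathbf{x} \in C_k} \lVert \mathbf{x} - \mathbf{m}_k \rVert
$$

where $N$ is the total number of data points and $\mathbf{m}_k$ is the centroid of cluster $C_k$. Consistent with the statement above, smaller values indicate more compact clusters.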