2019
DOI: 10.1109/access.2019.2918772

HCFS: A Density Peak Based Clustering Algorithm Employing A Hierarchical Strategy

Abstract: Clustering, which explores the visualization and distribution of data, has recently been widely studied. Although current clustering algorithms such as DBSCAN can detect arbitrary-shape clusters and work well, the parameters involved in these methods are often difficult to determine. Clustering using a fast search of density peaks is a promising technique for solving this problem. However, current methods suffer from the problem of uneven distribution within local clusters. To solve this problem, we p…

Cited by 13 publications (6 citation statements)
References 21 publications
“…This paper builds several sets of experiments, and the results indicate that the density-peak-based algorithm (CFS) [26] outperforms the spectral clustering algorithm, although CFS has no rule for avoiding the difficulties caused by selecting a "qualified" centre point, and it does not take the non-uniform distribution of the data set into account. Zhuo et al. [5] proposed a method that can solve this problem, although it is only applicable to low-dimensional data sets. However, non-uniform distribution within a cluster is also common in high-dimensional data sets.…”
Section: Related Workmentioning
confidence: 99%
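The density-peaks idea underlying CFS ranks candidate centres by two quantities: local density and distance to the nearest denser point. A minimal sketch of computing both is below; this illustrates the general technique only, not the paper's implementation, and the cut-off distance `dc` and the toy data are assumptions:

```python
import numpy as np

def density_peaks_quantities(X, dc):
    """For each point, compute rho (number of neighbours within the
    cut-off distance dc) and delta (distance to the nearest point of
    higher density) - the two quantities density-peaks clustering
    uses to identify cluster centres (high rho AND high delta)."""
    n = len(X)
    # full pairwise Euclidean distance matrix
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (d < dc).sum(axis=1) - 1  # exclude the point itself
    delta = np.empty(n)
    for i in range(n):
        # break density ties by index so equal-density points chain together
        higher = np.where((rho > rho[i]) |
                          ((rho == rho[i]) & (np.arange(n) < i)))[0]
        if higher.size:
            delta[i] = d[i, higher].min()
        else:
            delta[i] = d[i].max()  # globally densest point
    return rho, delta

# two well-separated toy groups
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0]])
rho, delta = density_peaks_quantities(X, dc=0.5)
```

Points 0 and 3 stand out with large `delta`, marking them as centre candidates for the two groups.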
“…Obviously, the time cost of the first step of the experiment is much smaller than that of the second and third steps and can be almost ignored. Furthermore, in the second step, the complexity of constructing the distance matrix is N * (N − 1) * k, where N is the number of data points and k is the number of features, while the complexity of the third step can be drawn from [5].…”
Section: Time Complexity Analysismentioning
confidence: 99%
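The quoted N * (N − 1) * k cost corresponds to a naive pairwise distance construction, as in the sketch below (a generic illustration, not the cited implementation; exploiting symmetry halves the constant in practice):

```python
import numpy as np

def distance_matrix(X):
    """Naive pairwise Euclidean distance matrix. Each of the
    N*(N-1) off-diagonal entries costs O(k) work for k features,
    matching the N*(N-1)*k complexity quoted above; the symmetric
    entry is filled in the same pass."""
    n, k = X.shape
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = np.sqrt(((X[i] - X[j]) ** 2).sum())
    return d

d = distance_matrix(np.array([[0.0, 0.0], [3.0, 4.0]]))
```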
“…The cut-off distance is assigned by users based on experience and the priorities of the data set. DPC then assigns each remaining point according to the local density of its neighbours, so an error in a cluster label can propagate through the subsequent assignments [4]. The clustering and recombination levels of the network are improved by taking into account factors including the remaining energy of the nodes and the distance between the nodes [5].…”
Section: Introductionmentioning
confidence: 99%
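The error propagation mentioned above comes from DPC's one-pass assignment: each non-centre point inherits the label of its nearest higher-density neighbour, so a wrong label early in the chain contaminates everything assigned through it. A hedged sketch (the toy densities and centre indices are assumptions):

```python
import numpy as np

def dpc_assign(d, rho, centre_idx):
    """DPC-style assignment: visit points in order of decreasing
    density; each non-centre point inherits the label of its
    nearest neighbour of strictly higher density. Assumes the
    globally densest points are among the chosen centres."""
    n = len(rho)
    labels = np.full(n, -1)
    for c, i in enumerate(centre_idx):
        labels[i] = c
    order = np.argsort(-rho, kind="stable")
    for i in order:
        if labels[i] == -1:
            # all higher-density points were processed (hence labelled) earlier
            higher = [j for j in order if rho[j] > rho[i]]
            nearest = min(higher, key=lambda j: d[i, j])
            labels[i] = labels[nearest]
    return labels

# six points on a line, two density peaks at indices 1 and 4
pos = np.array([0.0, 1, 2, 10, 11, 12])
d = np.abs(pos[:, None] - pos[None, :])
rho = np.array([3, 5, 3, 3, 5, 3])
labels = dpc_assign(d, rho, centre_idx=[1, 4])
```

Because labels flow only from denser to sparser points, a single mislabelled high-density point would relabel its whole downstream chain — the propagation problem the citation describes.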
“…Many researchers have addressed the difficulty DPC has in dealing with datasets with complex structures. Zhuo [21] confronted the uneven distribution within local clusters and proposed a density peaks clustering algorithm employing a hierarchical strategy (HCFS). HCFS uses a new mechanism to measure the similarity and connectivity of subclusters, combining highly similar and interconnected subclusters into a single cluster.…”
Section: Introductionmentioning
confidence: 99%
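The hierarchical merging step can be illustrated with a deliberately simplified stand-in: single-linkage merging of subclusters under a distance threshold. HCFS's actual similarity and connectivity measures are more elaborate than this; the `merge_dist` threshold and the toy subcluster labels are assumptions made purely for illustration:

```python
import numpy as np

def merge_subclusters(points, labels, merge_dist):
    """Repeatedly merge any pair of subclusters whose closest
    points lie within merge_dist (single-linkage). A stand-in for
    HCFS's subcluster similarity/connectivity test, showing only
    the shape of the hierarchical combination step."""
    labels = labels.copy()
    merged = True
    while merged:
        merged = False
        ids = np.unique(labels)
        for a_i in range(len(ids)):
            for b_i in range(a_i + 1, len(ids)):
                a, b = ids[a_i], ids[b_i]
                pa, pb = points[labels == a], points[labels == b]
                # smallest gap between the two subclusters
                gap = np.min(np.linalg.norm(pa[:, None] - pb[None, :],
                                            axis=-1))
                if gap <= merge_dist:
                    labels[labels == b] = a  # absorb b into a
                    merged = True
                    break
            if merged:
                break
    return labels

# three adjacent fragments merge; the far point stays separate
points = np.array([[0.0, 0], [1, 0], [2, 0], [10, 0]])
labels = merge_subclusters(points, np.array([0, 1, 2, 3]), merge_dist=1.5)
```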