CRAFTER: A Tree-Ensemble Clustering Algorithm for Static Datasets with Mixed Attributes and High Dimensionality
2018
DOI: 10.1109/tkde.2018.2807444

Cited by 12 publications (7 citation statements)
References 19 publications
“…Lin et al [101] present a tree-ensemble clustering algorithm, CRAFTER, for clustering high-dimensional mixed datasets. First, a random subset of data points is drawn and the random forests clustering algorithm [162] is applied.…”
Section: E Other (mentioning)
confidence: 99%
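The quoted first step (draw a random subset, then apply random forests clustering) can be illustrated with a minimal sketch. The sketch below assumes the usual proximity-based unsupervised random forest approach (real versus column-permuted "synthetic" data) on numeric features; the function and parameter names are illustrative and not taken from the CRAFTER paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def rf_cluster_subset(X, n_clusters=3, subset_size=500, n_trees=200, seed=0):
    """Cluster a random subset of X with proximity-based random forest clustering."""
    rng = np.random.default_rng(seed)
    # Step 1: draw a random subset of the data points.
    idx = rng.choice(len(X), size=min(subset_size, len(X)), replace=False)
    sub = X[idx]
    # Step 2: build "synthetic" data by permuting each column independently,
    # so the forest learns to separate real joint structure from noise.
    synth = np.column_stack([rng.permutation(sub[:, j]) for j in range(sub.shape[1])])
    X_all = np.vstack([sub, synth])
    y_all = np.r_[np.ones(len(sub)), np.zeros(len(synth))]
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X_all, y_all)
    # Step 3: proximity of two real points = fraction of trees putting them in the same leaf.
    leaves = forest.apply(sub)                              # shape (n_subset, n_trees)
    prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
    # Step 4: average-linkage hierarchical clustering on the dissimilarity 1 - proximity.
    labels = fcluster(linkage(squareform(1.0 - prox, checks=False), method="average"),
                      t=n_clusters, criterion="maxclust")
    return idx, labels

Usage would be along the lines of idx, labels = rf_cluster_subset(X, n_clusters=4); the clustered subset can then serve as a seed for assigning the remaining points.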
“…Clustering is the method of categorising data into groups or clusters such that objects within a cluster have a high degree of similarity to one another but are quite different from objects in other clusters. The term cluster analysis itself encompasses a number of different algorithms and methods for grouping objects of similar kind into respective categories: Tree Clustering (Lin et al (2018), Liu et al (2005), Freeman (2006), Ahmed et al (2011), Lv et al (2018b), Freeman (2007), WANG et al (2009), Buttrey & Whitaker (2015), Qiu & Li (2021), Jothi et al (2015), Page (1974), Vathy-Fogarassy et al (2005), Miller & Rose (1994)), Block Clustering, k-Means Clustering and EM algorithms (Wilkin & Huang (2007)), graph-based clustering (Bai et al (2017)), hierarchical clustering (Köhn & Hubert (2014)), model-based clustering (Fraley & Raftery (1998), Fraley & Raftery (1999), Fraley & Raftery (2002)), and Lloyd's K-means clustering and the progressive greedy K-means clustering (Wilkin & Huang (2007)).…”
Section: On Clustering and the Random Potts Models (mentioning)
confidence: 99%
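Since the statement singles out Lloyd's K-means as the baseline formulation (Wilkin & Huang (2007)), a minimal numpy sketch of the Lloyd iteration is included here for reference; the variable names are illustrative, and it assumes numeric features and Euclidean distance rather than the mixed-attribute setting CRAFTER targets.

import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's K-means: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]      # random initial centroids
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its assigned points
        # (kept unchanged if a cluster happens to be empty).
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers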
“…The performance of the KMCMD algorithm with initKmix was also compared with different state-of-the-art initialization methods: Wu's initialization [41], Cao's initialization [14], Khan and Ahmad's initialization [28] and Ini Entropy [31]. KMCMD with initKmix was also compared with the recently published CRAFTER [30] algorithm. The results of the various clustering methods are presented in Table 4.…”
Section: Categorical Datasets (mentioning)
confidence: 99%
“…The results of the various clustering methods are presented in Table 4. The results for the other clustering algorithms were taken from the papers [28,30,31]. Except for the Soybean-small dataset, the KMCMD algorithm with initKmix outperformed the other clustering methods.…”
Section: Categorical Datasets (mentioning)
confidence: 99%