2010
DOI: 10.4018/jdwm.2010100101

ASCCN

Abstract: Special clustering algorithms are attractive for the task of grouping an arbitrarily shaped database into several proper classes. Until now, a wide variety of clustering algorithms for this task have been proposed, although the majority of these algorithms are density-based. In this paper, the authors extend the dissimilarity measure to a compatible measure and propose a new algorithm (ASCCN) based on the results. ASCCN is an unambiguous partition method that groups objects into compatible nucleoids and merges these…
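The abstract only describes the method at a high level. As a rough illustrative sketch (not the paper's actual algorithm), the snippet below approximates "compatibility" with a simple Euclidean distance threshold and forms nucleoids as connected components of the resulting compatibility graph; the function names, the eps parameter, and the compatibility test itself are all assumptions made for illustration.

```python
# Illustrative sketch only: "compatibility" is approximated by a distance
# threshold, and nucleoids by connected components under that relation.
# This does not reproduce the paper's actual compatible measure or merging rules.
import numpy as np

def compatible(a, b, eps=0.5):
    # Hypothetical compatibility test: two objects are compatible
    # if they lie within eps of each other.
    return np.linalg.norm(a - b) <= eps

def nucleoids(points, eps=0.5):
    # Group points into nucleoids: connected components of the
    # compatibility graph, found with a simple union-find.
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if compatible(points[i], points[j], eps):
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two well-separated blobs; each should form its own nucleoid.
    pts = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
    print([len(g) for g in nucleoids(pts, eps=0.5)])  # e.g. [20, 20]
```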

Cited by 5 publications (2 citation statements)
References 31 publications
“…The sampling approaches (Aggarwal et al., 2009; Cheng et al., 1998; Guha et al., 1998; Kranen et al., 2011; Lee et al., 2009; Ng et al., 2002; Pal et al., 2002; Sakai et al., 2009; Yildizli et al., 2011) usually choose the samples by a certain rule such as a chi-square or divergence hypothesis (Hathaway et al., 2006). The incremental approaches (Bradley et al., 1998; Farnstrom et al., 2000; Gupta et al., 2004; Karkkainen et al., 2007; Luhr et al., 2009; Nguyen-Hoang et al., 2009; Ning et al., 2009; O'Callaghan et al., 2002; Ramakrishnan et al., 1996; Siddiqui et al., 2009; Wan et al., 2010, 2011) generally maintain past knowledge from previous runs of a clustering algorithm to produce or improve the future clustering model. Nevertheless, as Hore et al. (2007) pointed out, many existing algorithms for large and very large data sets are used for the crisp case, rarely for the fuzzy case.…”
Section: Introduction (mentioning)
confidence: 99%
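As a toy illustration of the warm-start idea behind the incremental approaches quoted above (my own minimal sketch under assumed names and parameters, not any of the cited authors' methods), the snippet below reuses cluster centres learned from earlier batches as the initialisation when a new batch of data arrives.

```python
# Sketch of the "maintain past knowledge" idea only: centres from a previous
# run seed the clustering of the next batch (warm start).
import numpy as np

def kmeans(X, centres, iters=20):
    # Plain Lloyd iterations starting from the supplied centres.
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        for k in range(len(centres)):
            if np.any(labels == k):
                centres[k] = X[labels == k].mean(axis=0)
    return centres, labels

rng = np.random.default_rng(1)
batch1 = rng.normal([0, 0], 0.2, (100, 2))
batch2 = rng.normal([3, 3], 0.2, (100, 2))

centres = rng.normal(size=(2, 2))                  # cold start on the first data
centres, _ = kmeans(np.vstack([batch1, batch2]), centres)

batch3 = rng.normal([3, 3], 0.2, (50, 2))          # new data arrives later
centres, _ = kmeans(batch3, centres)               # warm start: past knowledge reused
print(centres)
```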
“…It is an unsupervised technique, applied without knowledge of what causes the grouping or how many groups exist (Song, Hu, & Yoo, 2009; Engle & Gangopadhyay, 2010; Silla & Freitas, 2011). Arbitrarily shaped clustering was further treated in (Wan, Wang, & Su, 2010). Clustering may be implemented based on hierarchy, partitioning, density, grids, constraints, subspaces, and so on (Sander et al., 1998; Kwok et al., 2002; Grabmeier & Rudolph, 2002; Parsons, Haque, & Liu, 2004; Zhang et al., 2008; Horng et al., 2011).…”
Section: Introduction (mentioning)
confidence: 99%