Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2009
DOI: 10.1145/1557019.1557118
Fast approximate spectral clustering

Abstract: Spectral clustering refers to a flexible class of clustering procedures that can produce high-quality clusterings on small data sets but which has limited applicability to large-scale problems due to its computational complexity of O(n³), with n the number of data points. We extend the range of spectral clustering by developing a general framework for fast approximate spectral clustering in which a distortion-minimizing local transformation is first applied to the data. This framework is based on a theoretic…
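The framework the abstract describes can be illustrated with a minimal sketch: reduce n points to m ≪ n representatives with a distortion-minimizing step (here plain k-means), run spectral clustering on the representatives only (O(m³) rather than O(n³)), then propagate each representative's label back to its assigned points. This is an illustrative sketch using scikit-learn, not the paper's exact implementation; the data, parameter choices, and variable names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

# Illustrative two-blob data set (not from the paper).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (200, 2)),
               rng.normal(3.0, 0.3, (200, 2))])

# Step 1: distortion-minimizing local transformation -- map each of the
# n = 400 points to one of m = 40 k-means representatives.
m = 40
km = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X)
reps = km.cluster_centers_              # (m, 2) representatives

# Step 2: spectral clustering on the m representatives only.
sc = SpectralClustering(n_clusters=2, random_state=0).fit(reps)

# Step 3: each original point inherits its representative's cluster label.
labels = sc.labels_[km.labels_]         # (n,) labels for all original points
print(len(labels))                      # 400
```

The cubic cost now applies only to the m×m affinity matrix of representatives, which is the source of the speedup.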

Cited by 407 publications (294 citation statements)
References 37 publications
“…However, these methods have limited scalability due to the required computational cost to obtain the eigenvectors of the data affinity matrix. Recent works have developed techniques to sparsify and simplify the procedure via the use of representative samples [17] or parallelization [18]. For this paper however, we simply apply the Ng-Jordan-Weiss (NJW) algorithm [19] in vanilla fashion and discuss its performance in Section 4.…”
Section: Graph Clustering Algorithms, Agglomerative Hierarchical Clustering (mentioning)
confidence: 99%
“…In an opposite direction to our work, clustering can be approximated by finding representative objects, clustering them, and assigning the remaining objects to the clusters of their representatives. Yan et al [2] use k-means or RP trees to find representative points, Kaufman and Rousseeuw [3] k-medoids, and Ester et al [4] the most central object of a data page.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, spectral clustering is known to suffer from a high computational cost associated with the n × n matrix W, especially when n is large. Consequently, there has been considerable effort to develop fast, approximate algorithms that can handle large data sets (Fowlkes et al., 2004; Yan et al., 2009; Sakai and Imiya, 2009; Wang et al., 2009; Chen and Cai, 2011; Wang et al., 2011; Tasdemir, 2012; Choromanska et al., 2013; Cai and Chen, 2015; Moazzen and Tasdemir, 2016; Chen, 2018). Interestingly, a considerable fraction of them use a landmark set to help reduce the computational complexity of spectral clustering.…”
Section: Introduction (mentioning)
confidence: 99%
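The landmark idea mentioned in the excerpt above (e.g., the Nyström approach of Fowlkes et al., 2004) can be sketched as follows: instead of forming the full n × n affinity matrix W, compute only its columns at m landmark points and use them to build a rank-m approximation. This is a hedged, self-contained sketch under assumed data and an RBF affinity; it is not any one paper's exact method.

```python
import numpy as np

# Illustrative data and m randomly chosen landmarks (assumptions, not from the paper).
rng = np.random.default_rng(1)
n, m = 500, 50
X = rng.normal(size=(n, 2))
landmarks = X[rng.choice(n, size=m, replace=False)]

def rbf(A, B, gamma=1.0):
    """RBF (Gaussian) affinity between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

C = rbf(X, landmarks)                       # n x m cross-affinities (all we compute)
W_mm = rbf(landmarks, landmarks)            # m x m landmark-landmark affinities

# Nystrom-style rank-m approximation of the full n x n affinity matrix W:
# W ~= C @ pinv(W_mm) @ C.T, without ever forming W itself.
W_approx = C @ np.linalg.pinv(W_mm) @ C.T
print(W_approx.shape)                       # (500, 500)
```

Spectral methods then work with the rank-m factorization (for instance its leading eigenvectors) at a cost driven by m rather than n.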