2010
DOI: 10.1016/j.neucom.2009.12.029

Spectral clustering with eigenvector selection based on entropy ranking

Cited by 54 publications (49 citation statements)
References 18 publications
“…Shi's fundamental principle [27] was that, for each cluster, there exist eigenvectors whose entries are large for the data in that cluster and close to zero for the data in other clusters. In another methodology, in 2010 Zhao [28] used entropy to measure each eigenvector's power to separate the data, thereby establishing the importance of each feature. Another approach was proposed by Ashkezari in 2011 [38], in which the data are mapped into a nonlinear feature space by KPCA to extract nonlinear features.…”
Section: Previous Work On Selecting Eigenvectors
confidence: 99%
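As a rough illustration of the entropy-ranking idea attributed to Zhao [28] in the statement above, the sketch below scores each eigenvector of a normalized graph Laplacian by the Shannon entropy of its entry distribution and ranks the eigenvectors by that score. The function name, the histogram-based entropy estimate, and the "lower entropy means stronger cluster structure" heuristic are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def entropy_rank_eigenvectors(W, m=10, bins=20):
    """Rank the first m eigenvectors of the normalized Laplacian of
    affinity matrix W by the Shannon entropy of their entries.

    Illustrative sketch: the histogram-based entropy estimate and the
    'lower entropy = stronger cluster structure' heuristic are
    assumptions, not the exact criterion of Zhao [28].
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(W.shape[0]) - D_inv_sqrt @ W @ D_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)  # columns sorted by ascending eigenvalue

    scores = []
    for j in range(m):
        hist, _ = np.histogram(eigvecs[:, j], bins=bins)
        p = hist / hist.sum()       # empirical distribution of entries
        p = p[p > 0]
        scores.append(-np.sum(p * np.log(p)))  # Shannon entropy

    # Low entropy = entries concentrated around a few values, which is
    # what a cluster-indicating eigenvector looks like. In practice the
    # near-constant first eigenvector may need to be excluded.
    order = np.argsort(scores)
    return order, eigvecs[:, :m]
```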
“…In 2008, Jiang [26] was the first to propose a method for selecting better eigenvectors. In recent years, many attempts have been made to weight, rank, and select eigenvectors that carry more information [26][27][28][29]. In this paper, a method based on selecting the combination of eigenvectors that leads to the best clustering is presented.…”
Section: Introduction
confidence: 99%
“…But when the number of clusters is not given, rounding becomes more difficult. To tackle this problem, some methods use Gaussian mixture models to determine the number of clusters and to partition the data [167,180]. This is done after selecting the relevant eigenvectors using heuristics.…”
Section: Spectral Clustering
confidence: 99%
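As a minimal sketch of the "Gaussian mixture model after eigenvector selection" step described above, assuming scikit-learn's GaussianMixture: the snippet fits mixtures with an increasing number of components to the selected eigenvectors and picks the component count by BIC, so the number of clusters is inferred from the data. The function name and the BIC criterion are assumptions, not necessarily what [167,180] used.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_cluster_selected_eigvecs(V_sel, k_max=10, seed=0):
    """Fit GMMs with 1..k_max components to the rows of V_sel
    (n_points x n_selected_eigenvectors) and keep the model with the
    lowest BIC, so the number of clusters is inferred from the data.

    BIC is one common model-selection rule; it is an assumption here,
    not necessarily the criterion used in [167,180].
    """
    best_gmm, best_bic = None, np.inf
    for k in range(1, k_max + 1):
        gmm = GaussianMixture(n_components=k, random_state=seed).fit(V_sel)
        bic = gmm.bic(V_sel)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    return best_gmm.n_components, best_gmm.predict(V_sel)
```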
“…The k with the lowest cost is chosen as an estimate of the number of clusters. Xiang and Gong [167] and Zhao et al. [180] question the assumption that clustering should be based on all the eigenvectors from a contiguous block at the beginning of the eigenvector spectrum. They use heuristics to choose a collection of eigenvectors that need not form a contiguous block, and then use Gaussian mixture models to determine the number of clusters and to partition the data points.…”
Section: Related Work
confidence: 99%
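To make the non-contiguous-block point concrete, here is a hypothetical end-to-end usage of the two sketches above: the entropy ranking may pick, say, eigenvectors 2, 4, and 5 rather than the first three, and the GMM step then infers the number of clusters. All names come from the sketches above and are assumptions, not code from the cited papers.

```python
import numpy as np

# Three well-separated 2-D Gaussian blobs as toy data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])

# Gaussian affinity matrix; sigma = 0.5 is an arbitrary illustrative choice.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq_dists / (2 * 0.5 ** 2))

order, V = entropy_rank_eigenvectors(W, m=8)
V_sel = V[:, order[:3]]  # top-ranked eigenvectors, possibly non-contiguous
n_clusters, labels = gmm_cluster_selected_eigvecs(V_sel, k_max=6)
print(n_clusters, labels[:10])
```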