2014
DOI: 10.1016/j.neucom.2013.12.027
Spectral clustering of high-dimensional data exploiting sparse representation vectors

Cited by 47 publications (26 citation statements) | References 51 publications
“…Common alternatives to subspace clustering include approximate expectation maximization (EM) [23], spectral clustering [89], shared-neighbor methods [46,91,93] and relevant set correlation [41,88], and clustering ensembles [31,32].…”
Section: Clustering Techniques for High-Dimensional Data
confidence: 99%
“…This algorithm is mainly used to compare feature-selection performance. The test data sets are USPS, UMIST [19], COIL20, and Isolet [5]; they are described in Table 1. In this paper, two evaluation metrics are used to evaluate the clustering performance of the various algorithms: the accuracy rate (AC) and the normalized mutual information (NMI) [19].…”
Section: Experiments and Analysis
confidence: 99%
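The AC and NMI metrics named in the statement above are standard for evaluating clustering against ground-truth labels. A minimal sketch follows, assuming scikit-learn and SciPy are available; the helper name `clustering_accuracy` and the toy labels are illustrative, not from the cited papers. AC requires matching predicted cluster IDs to true classes, typically via the Hungarian algorithm, since cluster labels are arbitrary permutations.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Clustering accuracy (AC): best one-to-one matching between
    predicted cluster labels and ground-truth classes, found with
    the Hungarian algorithm on the contingency table."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    # w[i, j] = number of samples in predicted cluster i with true class j
    w = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        w[p, t] += 1
    row, col = linear_sum_assignment(-w)  # maximize total overlap
    return w[row, col].sum() / y_true.size

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]  # same partition, permuted cluster IDs
print(clustering_accuracy(y_true, y_pred))           # → 1.0
print(normalized_mutual_info_score(y_true, y_pred))  # → 1.0
```

Both metrics are invariant to label permutation, which is why the permuted prediction above still scores perfectly.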
“…Experiments show the superiority of the proposed algorithm. Sparse representation has been demonstrated to be effective in dealing with high-dimensional data; Wu et al. [5] proposed a spectral clustering algorithm with two weight matrices based on sparse representation vectors. Experiments on real high-dimensional data show that the algorithm outperforms existing spectral clustering algorithms.…”
Section: Introduction
confidence: 99%
“…For example, Wright et al. [5] used individual sparse coefficients directly to build the affinity matrix for spectral clustering. However, Wu et al. [6] showed that exploiting complete sparse representation vectors reflects more truthful similarity among data objects, since more contextual information is taken into consideration. They assumed that the sparse representation vectors of two similar objects should themselves be similar, since the objects can be reconstructed in a similar fashion from the other data objects.…”
confidence: 99%
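The idea described above can be sketched in a few lines: each sample is coded as a sparse combination of the other samples, and affinity is taken between the *complete* coefficient vectors rather than from individual coefficients. This is a minimal illustration, not the algorithm of [5] or [6]: Lasso stands in for the sparse coding step, cosine similarity for the vector comparison, and the disjoint-subspace toy data is invented for the demo.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_representation_vectors(X, alpha=0.01):
    """Row i of the returned matrix is the sparse coefficient vector
    expressing sample x_i in terms of all the other samples."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(X[others].T, X[i])  # dictionary = the other samples
        C[i, others] = model.coef_
    return C

def affinity_from_vectors(C):
    """Cosine similarity between complete coefficient vectors: objects
    reconstructed in a similar fashion receive a high affinity."""
    U = C / (np.linalg.norm(C, axis=1, keepdims=True) + 1e-12)
    return np.abs(U @ U.T)

# Toy data: two clusters living in disjoint 10-dimensional subspaces
rng = np.random.default_rng(0)
X = np.zeros((30, 20))
X[:15, :10] = rng.normal(size=(15, 10))
X[15:, 10:] = rng.normal(size=(15, 10))

S = affinity_from_vectors(sparse_representation_vectors(X))
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(S)
```

Because atoms from the other subspace are orthogonal to each target here, the coefficient vectors have within-cluster support only, so the affinity matrix comes out block-diagonal and spectral clustering separates the two groups.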
“…To improve the performance of sparse-representation-based spectral clustering (SRSC) [6], we propose a novel clustering algorithm for high-dimensional data that constructs the affinity matrix via KNN-based sparse representation coefficient vectors. We name the proposed algorithm KNNSRSC.…”
confidence: 99%