An Entropy Weighting k-Means Algorithm for Subspace Clustering of High-Dimensional Sparse Data
2007
DOI: 10.1109/tkde.2007.1048

Cited by 553 publications (322 citation statements)
References 29 publications
“…Thus, the total feature vector per chunk contains 16 * 2 * 12 = 384 features. Experimental results for FCM feature weighting-based VQ can be found in the literature [4], [5], [6], [7]. In this paper, results for FE feature weighting-based VQ are presented.…”
Section: B. Speech Processing (mentioning)
Confidence: 99%
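As a rough illustration of the arithmetic in the excerpt above, the sketch below flattens a per-chunk array into a single 384-dimensional feature vector. The specific layout (16 frames, 2 coefficient streams such as static plus delta, 12 coefficients each) is an assumption for this sketch, not taken from the cited paper.

```python
import numpy as np

# Hypothetical layout matching the 16 * 2 * 12 = 384 arithmetic in the excerpt:
# 16 frames per chunk, 2 coefficient streams (e.g. static + delta), 12 coefficients each.
N_FRAMES, N_STREAMS, N_COEFFS = 16, 2, 12

def chunk_feature_vector(frames: np.ndarray) -> np.ndarray:
    """Flatten a (16, 2, 12) per-chunk array into one 384-dim feature vector."""
    assert frames.shape == (N_FRAMES, N_STREAMS, N_COEFFS)
    return frames.reshape(-1)  # 16 * 2 * 12 = 384 features

# Random placeholder data standing in for real speech coefficients.
vec = chunk_feature_vector(np.random.randn(N_FRAMES, N_STREAMS, N_COEFFS))
print(vec.shape)  # (384,)
```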
“…. , M [5], [6], [7]. Weight values were estimated using either FCM-based estimation technique [4], [5] or FE-based estimation technique [6].…”
Section: Fuzzy Feature Weighting (mentioning)
Confidence: 99%
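The excerpt above refers to estimating per-feature weight values. A minimal sketch of entropy-regularized feature weighting in the spirit of the indexed paper is given below: within one cluster, a feature's weight is a softmax of its negative within-cluster dispersion, so tight (informative) dimensions get large weights. The function name, the parameter gamma, and the toy data are illustrative assumptions, not the cited papers' exact notation or procedure.

```python
import numpy as np

def entropy_feature_weights(X: np.ndarray, centroid: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Per-feature weights for one cluster: low within-cluster dispersion
    gives high weight, via a softmax of negative dispersion (the closed-form
    solution of an entropy-regularized weighting objective)."""
    dispersion = ((X - centroid) ** 2).sum(axis=0)  # one dispersion value per feature
    logits = -dispersion / gamma
    logits -= logits.max()                          # numerical stability
    w = np.exp(logits)
    return w / w.sum()                              # weights sum to 1

# Toy cluster: feature 0 is tight (informative), feature 1 is noisy.
X = np.column_stack([np.random.normal(0.0, 0.1, 50),
                     np.random.normal(0.0, 5.0, 50)])
print(entropy_feature_weights(X, X.mean(axis=0)))   # weight on feature 0 close to 1
```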
“…Clustering algorithms [27][28][29][30][31][32][33][34][35][36][37][38][39] can be applied to text mining to allow the automatic recognition of some sort of structure in the analyzed set of documents. In particular, clustering is designed to discover groups in the set of documents such that the documents within a group are more similar to one another than to documents of other groups.…”
Section: Text Clustering (mentioning)
Confidence: 99%
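To make the grouping idea in the excerpt above concrete, here is a minimal sketch of text clustering with TF-IDF features and plain k-means (not the entropy-weighting variant the indexed paper proposes). The documents and the number of clusters are placeholders chosen only for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Placeholder documents; in practice these would be the corpus under analysis.
docs = [
    "stock markets fell sharply on inflation fears",
    "central bank raises interest rates again",
    "new vaccine shows strong results in trials",
    "hospital reports drop in flu admissions",
]

# Represent documents as TF-IDF vectors, then group them with standard k-means,
# so documents within a cluster are more similar to each other than to the rest.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: finance vs. health documents
```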