2022 | DOI: 10.1109/tpami.2020.3047489
Detecting Meaningful Clusters From High-Dimensional Data: A Strongly Consistent Sparse Center-Based Clustering Approach

Cited by 27 publications (7 citation statements)
References: 64 publications
“…Sebastian et al. [55] provided an interesting application of k-means clustering in the characterization of snore signals in the context of upper-airway collapse. Chakraborty et al. [56] proposed the Lasso-weighted k-means algorithm, which is specifically useful for high-dimensional datasets such as gene-expression data. Another interesting algorithm was developed by Gondeau et al. [57], using object weighting in k-means clustering.…”
Section: Clustering Strategies (mentioning)
confidence: 99%
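The Lasso-weighted k-means idea referenced in [56], attaching a weight to every feature and penalizing the weights so that uninformative features end up at exactly zero, can be illustrated with a short sketch. The weight update below uses a generic soft-thresholding step in the spirit of sparse/feature-weighted k-means; it is an illustrative assumption, not the exact update of Chakraborty et al.

```python
import numpy as np

def lasso_weighted_kmeans(X, k, lam, n_iter=30, seed=0):
    """Sketch of a lasso-penalized, feature-weighted k-means.

    The weight step uses generic soft-thresholding so that uninformative
    features receive exactly zero weight; it is an illustrative stand-in,
    not the exact LW-k-means update of Chakraborty et al. [56].
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    centers = X[rng.choice(n, size=k, replace=False)].copy()
    w = np.full(p, 1.0 / p)  # start from uniform feature weights

    for _ in range(n_iter):
        # assign each point to the nearest center under the weighted metric
        d2 = (((X[:, None, :] - centers[None, :, :]) ** 2) * w).sum(axis=2)
        labels = d2.argmin(axis=1)

        # recompute each center as the mean of its cluster
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)

        # per-feature "between-cluster" dispersion: total minus within-cluster
        total = ((X - X.mean(axis=0)) ** 2).sum(axis=0)
        within = ((X - centers[labels]) ** 2).sum(axis=0)
        gap = total - within

        # soft-threshold the gaps: features that barely separate the clusters
        # are driven to exactly zero weight by the penalty lam
        w = np.maximum(gap - lam, 0.0)
        w = w / w.sum() if w.sum() > 0 else np.full(p, 1.0 / p)

    return labels, w
```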
“…Then this transformation of the M matrix makes the expected values of m_i^δ and m_j^δ equal whenever objects i and j belong to the same cluster. Using the example considered in (6), we now illustrate Θ in equation (8) below.…”
Section: Full Alignment of G-vectors (mentioning)
confidence: 99%
“…To detect clusters in high-dimensional data, one class of clustering techniques (including [2], [3], [4], [5], [6], [7], [8], [9]) assumes that the information regarding meaningful clusters is contained in a small number of features. These algorithms, often referred to as sparse clustering procedures, rely on extracting the "relevant" features that influence the resulting cluster partitions.…”
Section: Introduction (mentioning)
confidence: 99%
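To make the "relevant feature" idea concrete, here is a hypothetical usage of the lasso_weighted_kmeans sketch above on synthetic data in which only two of fifty features carry the cluster structure; the penalty value lam=50.0 is an illustrative choice, not taken from any of the cited papers.

```python
import numpy as np

# Hypothetical usage of the lasso_weighted_kmeans sketch above: 300 points,
# 2 informative features carrying a 3-cluster structure, plus 48 pure-noise
# features, mimicking a high-dimensional setting.
rng = np.random.default_rng(1)
informative = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (-4.0, 0.0, 4.0)])
noise = rng.normal(0.0, 1.0, size=(300, 48))
X = np.hstack([informative, noise])

labels, w = lasso_weighted_kmeans(X, k=3, lam=50.0)
print("features with nonzero weight:", np.flatnonzero(w))  # ideally only 0 and 1
```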
“…Lloyd's algorithm (Lloyd, 1982) is a popular coordinate descent algorithm for optimizing (17). Despite its widespread application, k-means is notoriously unsuitable for high-dimensional datasets, where only a handful of features are relevant in revealing the cluster structure of the dataset (Chakraborty and Das, 2020). To tackle this problem, researchers have often resorted to the concept of feature weighting (De Amorim, 2016).…”
Section: Application to Clustering (mentioning)
confidence: 99%
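For reference, the coordinate-descent structure of Lloyd's algorithm mentioned above can be summarized in a minimal sketch: it alternates between the optimal assignments for fixed centers and the optimal centers for fixed assignments. This is the unweighted special case of the weighted sketch earlier; the objective numbering (17) belongs to the citing paper and is not reproduced here.

```python
import numpy as np

def lloyd(X, k, n_iter=100, seed=0):
    """Minimal sketch of Lloyd's algorithm: block coordinate descent on the
    k-means objective, alternating assignments and center updates."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # block 1: optimal assignments for the current centers
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # block 2: optimal centers (cluster means) for the current assignments
        new_centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                                else centers[c] for c in range(k)])
        if np.allclose(new_centers, centers):
            break  # no further decrease in the objective: local optimum
        centers = new_centers
    return labels, centers
```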