2017
DOI: 10.1109/tsp.2016.2614491
Learning From Hidden Traits: Joint Factor Analysis and Latent Clustering

Abstract: Dimensionality reduction techniques play an essential role in data analytics, signal processing and machine learning. Dimensionality reduction is usually performed in a preprocessing stage that is separate from subsequent data analysis, such as clustering or classification. Finding reduced-dimension representations that are well-suited for the intended task is more appealing. This paper proposes a joint factor analysis and latent clustering framework, which aims at learning cluster-aware low-dimensional repres…



Cited by 42 publications (22 citation statements)
References 52 publications (102 reference statements)
“…Values of m, the actual number of clusters, d, the dimension of the data, and N , the number of samples in the data set are provided in the close to 5 by K-MACE. The reason is the structure of the data, for which the numbers of elements, n_mj, of each cluster are [143, 77, 52, 35, 20, 5, 2, 2]. As these numbers show, the last three clusters have very few elements and cannot be detected as independent clusters by the K-MACE methods.…”
Section: B. Real Data (mentioning)
confidence: 99%
“…Similar to other partitional approaches, K-means requires the correct number of clusters (CNC) to finalize the clustering procedure. In general, most clustering algorithms require a CNC estimate as their input [4], [5], [6], [7]. However, in many practical applications this value is not available, and the CNC has to be estimated during the clustering procedure using the same data that requires clustering.…”
Section: Introduction (mentioning)
confidence: 99%
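The point that K-means needs the number of clusters supplied up front can be illustrated with a minimal sketch. This is a toy NumPy implementation of Lloyd's algorithm, not any cited method; the two-blob data and the deterministic initialization are illustrative assumptions:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's algorithm: k, the number of clusters, must be given as input."""
    # deterministic initialization: k points spread evenly through the data (assumed)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # recompute each center as the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

# two well-separated blobs; supplying the correct k = 2 recovers them
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
               rng.normal(10.0, 0.5, (20, 2))])
labels = kmeans(X, k=2)
```

With the wrong k (say 3 or 5), the same data gets split arbitrarily, which is why a CNC estimate matters.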
“…The sample complexity proved in [21] is appealing, which is exactly the number of unknowns. The caveat is that the A^(n)'s have to follow a certain continuous distribution, which means that some important types of tensors (e.g., tensors with discrete latent factors that have applications in machine learning [18, 26, 27]) may not be covered by the recoverability theorem in [21].…”
Section: CP Tensors (mentioning)
confidence: 99%
“…NLS3C [18] iteratively updates the sparse similarity matrix and the clustering labels in an efficient manner. JNKM [19] jointly combines a non-negative matrix factorization DR algorithm with the K-means clustering algorithm. RCC [20] performs the DR stage and the clustering stage together by optimizing a continuous global objective.…”
Section: Introduction (mentioning)
confidence: 99%
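The joint DR-plus-clustering idea attributed to JNKM above can be sketched, very loosely, as alternating NMF updates with a K-means pass on the reduced factors. The following is a hypothetical NumPy simplification under an assumed coupling weight `mu`, not the authors' actual JNKM algorithm:

```python
import numpy as np

def joint_nmf_kmeans(X, r, k, iters=40, mu=0.1, seed=0):
    """Alternate a multiplicative NMF step (X ≈ W @ H) with a K-means pass on the
    columns of H, then pull each column toward its centroid so the learned
    low-dimensional factors become cluster-aware. Schematic simplification only;
    r, k, iters, and mu are illustrative parameters, not values from the paper."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    centers = None
    for t in range(iters):
        # Lee-Seung multiplicative updates keep W and H non-negative
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
        if centers is None:
            # initialize centroids from k columns spread through the data (assumed)
            centers = H[:, np.linspace(0, n - 1, k).astype(int)].copy()
        # assign each column of H to its nearest centroid, then refit centroids
        d = ((H.T[:, None, :] - centers.T[None, :, :]) ** 2).sum(-1)  # (n, k)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[:, j] = H[:, labels == j].mean(axis=1)
        # cluster-aware regularization: shrink H toward its centroids
        H = (H + mu * centers[:, labels]) / (1.0 + mu)
    return W, H, labels

# two groups of columns built from distinct non-negative prototypes
rng = np.random.default_rng(3)
u = np.array([2.0, 2.0, 2.0, 0.0, 0.0])
v = np.array([0.0, 0.0, 2.0, 2.0, 2.0])
X = np.abs(np.hstack([u[:, None] + rng.normal(0, 0.1, (5, 10)),
                      v[:, None] + rng.normal(0, 0.1, (5, 10))]))
W, H, labels = joint_nmf_kmeans(X, r=2, k=2)
```

The point of the coupling term is that the factorization and the clustering inform each other in each pass, instead of running DR once and clustering its fixed output.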