2021
DOI: 10.48550/arxiv.2106.05241
Preprint
Multi-Facet Clustering Variational Autoencoders

Abstract: Work in deep clustering focuses on finding a single partition of data. However, high-dimensional data, such as images, typically feature multiple interesting characteristics one could cluster over. For example, images of objects against a background could be clustered over the shape of the object and separately by the colour of the background. In this paper, we introduce Multi-Facet Clustering Variational Autoencoders (MFCVAE), a novel class of variational autoencoders with a hierarchy of latent variables, eac…

Cited by 2 publications (3 citation statements)
References 20 publications
“…(11), the distribution follows a Gaussian distribution. Assuming the separability of the biological latent space and the condition-specific biological latent space, our multi-facet variational distribution [30] adopts a mean-field form, q(z_1, z_2 | x) = q(z_1 | x) q(z_2 | x), where J = 2. For each j, the variational distribution takes the form of a multivariate Gaussian distribution with a diagonal covariance matrix, whose mean and variance are determined by the encoder.…”
Section: Methods
confidence: 99%
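The mean-field factorisation quoted above can be sketched in a few lines: each facet gets its own encoder head producing the mean and log-variance of a diagonal Gaussian, and the joint log-density of the variational posterior is the sum of the per-facet log-densities. This is a minimal illustration with J = 2 facets; the linear heads, dimensions, and helper names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_facet(x, W_mu, W_logvar):
    # Hypothetical linear encoder head for one facet j: returns the mean
    # and log-variance of the diagonal Gaussian q(z_j | x).
    return x @ W_mu, x @ W_logvar

def diag_gaussian_logpdf(z, mu, logvar):
    # log N(z; mu, diag(exp(logvar))), summed over latent dimensions.
    return -0.5 * np.sum(logvar + np.log(2 * np.pi) + (z - mu) ** 2 / np.exp(logvar))

# Two facets (J = 2), e.g. a biological and a condition-specific latent space.
x = rng.normal(size=4)  # toy input
heads = [(rng.normal(size=(4, 3)), rng.normal(size=(4, 3))) for _ in range(2)]

# Mean-field factorisation: q(z_1, z_2 | x) = q(z_1 | x) * q(z_2 | x),
# so the log-densities of the facets simply add.
zs, logq = [], 0.0
for W_mu, W_logvar in heads:
    mu, logvar = encode_facet(x, W_mu, W_logvar)
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterised sample
    zs.append(z)
    logq += diag_gaussian_logpdf(z, mu, logvar)
```

In a full VAE these samples would feed a decoder and the log-density would enter the ELBO; the sketch only shows how the factorised posterior composes.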
“…Second, we can speed up GLSL by using parallelization (e.g., by simultaneously evaluating multiple latent operators). Finally, we can investigate a variant of variational autoencoders where there is a latent superstructure with multiple categorical latent variables [Falck et al., 2021] that is learned using GLSL.…”
Section: Discussion
confidence: 99%
“…There is a growing interest in combining complex LVMs with unsupervised deep learning models [Falck et al., 2021]. In this context, we can investigate a variant of variational autoencoders whose latent superstructure is learned using our method from Chapter 6.…”
Section: Future Work
confidence: 99%