2020
DOI: 10.1609/aaai.v34i04.6146
Shared Generative Latent Representation Learning for Multi-View Clustering

Abstract: Clustering multi-view data has been a fundamental research topic in the computer vision community. It has been shown that better accuracy can be achieved by integrating information from all the views than by using any single view alone. However, existing methods often struggle to handle large-scale datasets and to reconstruct samples well. This paper proposes a novel multi-view clustering method by learning a shared generative latent representation that obeys …

Cited by 44 publications (16 citation statements)
References 18 publications
“…Yang et al [48] proposed graph embedding in a Gaussian mixture variational autoencoder. Although there are already some VAE-based multi-view or multi-modal learning methods, such as [8,20,42,50], our work is the first attempt to give a disentangled multi-view VAE framework in view-common and view-peculiar representation learning perspectives.…”
Section: Related Work
confidence: 99%
“…For many MVC methods, the central bottleneck is their high complexity that makes it unrealistic for handling large-scale data clustering tasks. Recent approaches have achieved inspirational progress by applying deep models [3,7,34,45,50,55]. However, most of them learn the clustering structure by exploring common representation or fusing features of all views.…”
Section: Introduction
confidence: 99%
“…Some multimodal learning approaches learn a common low-dimensional latent space from different modalities. For example, variational autoencoder (VAE) neural networks are used to learn a shared latent representation of multiple modalities (Yin et al, 2020). Several Bayesian inference methods use the ability of exploring the spatial and temporal structures of data to fuse the modalities into a joint representation (Huang and Kingsbury, 2013).…”
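The shared-latent idea described in the statement above can be illustrated with a product-of-experts fusion of per-view Gaussian posteriors, one common way multi-view VAEs combine encoders into a single joint latent. This is a minimal conceptual sketch, not the implementation from Yin et al. (2020): `poe_fuse` is a hypothetical helper, and the random arrays stand in for outputs of real per-view encoder networks.

```python
import numpy as np

def poe_fuse(mus, logvars):
    """Fuse per-view Gaussian posteriors q(z|x_v) into one shared posterior
    via a product of experts, including a standard-normal prior expert."""
    precisions = [np.exp(-lv) for lv in logvars]   # 1/sigma^2 for each view
    prec_sum = sum(precisions) + 1.0               # + precision of N(0, I) prior
    mu = sum(p * m for p, m in zip(precisions, mus)) / prec_sum
    var = 1.0 / prec_sum
    return mu, np.log(var)

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

rng = np.random.default_rng(0)
# Hypothetical encoder outputs for two views of a batch of 4 samples,
# latent dimension 8 (in practice these come from per-view encoder nets).
mu1, lv1 = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
mu2, lv2 = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))

mu, logvar = poe_fuse([mu1, mu2], [lv1, lv2])
z = reparameterize(mu, logvar, rng)
print(z.shape)  # (4, 8)
```

Because precisions add, the fused posterior is always sharper than any single view's posterior, which is why the product-of-experts form is a natural choice for a shared representation.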
Section: Related Work
confidence: 99%
“…The essential idea is to find several low-dimensional representations embedded in latent spaces and finally attain a united representation for downstream clustering tasks [26]. Besides, aiming at finding a shared low-dimensional latent representation via matrix decomposition, the non-negative matrix factorization (NMF) [27]-based multi-view clustering methods [28,29,30,31] can also be seen as a branch of the subspace-based multi-view clustering method.…”
Section: Introduction
confidence: 99%
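The NMF-based route mentioned above can be sketched as a joint factorization in which each non-negative view X_v is approximated by W_v H with a single shared coefficient matrix H, fitted by multiplicative updates. This is a simplified, hypothetical illustration of the general idea, not any specific cited algorithm; `joint_nmf` and its parameters are stand-ins.

```python
import numpy as np

def joint_nmf(views, k, iters=200, eps=1e-9, seed=0):
    """Toy joint NMF: factor each non-negative view X_v ~ W_v @ H with one
    shared H (samples as columns), using standard multiplicative updates
    for the Frobenius-norm objective summed over views."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[1]                      # number of samples
    Ws = [rng.random((X.shape[0], k)) for X in views]
    H = rng.random((k, n))
    for _ in range(iters):
        for v, X in enumerate(views):          # per-view basis updates
            W = Ws[v]
            Ws[v] = W * (X @ H.T) / (W @ H @ H.T + eps)
        num = sum(W.T @ X for W, X in zip(Ws, views))
        den = sum(W.T @ W @ H for W in Ws)     # shared-H update pools views
        H = H * num / (den + eps)
    return Ws, H

rng = np.random.default_rng(1)
X1 = rng.random((20, 30))   # view 1: 20 features x 30 samples
X2 = rng.random((15, 30))   # view 2: 15 features x 30 samples
Ws, H = joint_nmf([X1, X2], k=3)
labels = H.argmax(axis=0)   # cluster samples via the shared representation
print(labels.shape)  # (30,)
```

Clustering on `H.argmax(axis=0)` is the usual shortcut in NMF-based clustering; running k-means on the columns of H is an equally common alternative.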