2018 | Preprint
DOI: 10.48550/arxiv.1802.05983
Disentangling by Factorising

Cited by 131 publications (247 citation statements) | References 20 publications
Citation types: 2 supporting, 243 mentioning, 0 contrasting
“…Factor VAE metric [16], Mutual Information Gap (MIG) [17], Modularity [19], DCI Disentanglement [20] and SAP score [14], similar to [21]. We show the disentanglement performance of the proposed method with the following existing variants of VAEs: 1) β-VAE, 2) FactorVAE, 3) TC-VAE, 4) DIP-VAE-I, 5) DIP-VAE-II, 6) Bottleneck-VAE in Table 2.…”
Section: Results (mentioning, confidence: 99%)
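The excerpt above names several disentanglement metrics without defining them. As one concrete example, here is a minimal sketch of the Mutual Information Gap (MIG) [17], assuming continuous latent codes discretized by binning and discrete ground-truth factors; the function name, binning choice, and helper structure are illustrative, not taken from any of the cited papers.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mig_score(latents, factors, n_bins=20):
    """Mutual Information Gap: for each ground-truth factor, the gap
    between the two latent dimensions with the highest mutual
    information, normalized by the factor's entropy.

    latents: (n_samples, n_codes) continuous latent codes
    factors: (n_samples, n_factors) discrete ground-truth factors
    """
    # Discretize continuous codes so discrete MI estimators apply.
    binned = np.stack(
        [np.digitize(z, np.histogram_bin_edges(z, bins=n_bins)[1:-1])
         for z in latents.T],
        axis=1)

    gaps = []
    for k in range(factors.shape[1]):
        v = factors[:, k]
        h = mutual_info_score(v, v)  # entropy H(v_k) in nats
        if h == 0:
            continue  # a constant factor carries no information
        mi = np.array([mutual_info_score(v, binned[:, j])
                       for j in range(binned.shape[1])])
        top2 = np.sort(mi)[-2:]
        gaps.append((top2[1] - top2[0]) / h)
    return float(np.mean(gaps))
```

MIG approaches 1 when each factor is captured by a single latent dimension and approaches 0 when that information is spread across several dimensions.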
“…In all single-stage methods, the disentanglement has improved due to the partial knowledge of label information through the cross-entropy loss. Meanwhile, two-stage disentanglement methods, such as β-TCVAE [27], β-VAE [28], factor-VAE [29], and the joint continuous-discrete factors VAE [30], create separate latent spaces without knowledge of class-label information. These methods mostly deal with autoencoder structures with a Gaussian prior distribution in the encoded space.…”
Section: A. Disentangle Latent Space Clustering (mentioning, confidence: 99%)
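For reference, the β-VAE [28] objective that these two-stage methods build on is the standard ELBO with the KL term to the isotropic Gaussian prior scaled by β > 1. A minimal PyTorch sketch, assuming a Bernoulli decoder and a diagonal-Gaussian encoder (the function name and reduction choices are illustrative):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction term plus a beta-weighted
    KL divergence from the Gaussian posterior N(mu, sigma^2) to the
    isotropic prior N(0, I). beta=1 recovers the plain VAE ELBO.
    """
    # Reconstruction negative log-likelihood, averaged over the batch
    # (Bernoulli decoder, so x_recon holds probabilities in [0, 1]).
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / x.size(0)
    # Analytic KL(N(mu, sigma^2) || N(0, I)), averaged over the batch.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl
```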
“…BetaVAE [14] uses a modified VAE objective that encourages the reduction of the KL-divergence to an isotropic Gaussian prior to ensure disentanglement of all latent codes. FactorVAE [16] encourages factorization of the aggregate posterior by approximating the KL-divergence using the cross-entropy loss of a classifier. Conditional VAE [27] improves the performance of VAE by conditioning the latent variable distribution on another variable.…”
Section: Related Work (mentioning, confidence: 99%)
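The classifier referred to here implements the density-ratio trick from the FactorVAE paper: a discriminator is trained to tell samples of the aggregate posterior q(z) from samples whose dimensions have been independently permuted across the batch (an approximate draw from the product of marginals), and its logits yield a total-correlation estimate that is added to the VAE objective. A hedged PyTorch sketch, with illustrative names and shapes:

```python
import torch

def permute_dims(z):
    """Shuffle each latent dimension independently across the batch,
    approximating a sample from the product of marginals prod_j q(z_j)."""
    perm_z = torch.zeros_like(z)
    for j in range(z.size(1)):
        perm_z[:, j] = z[torch.randperm(z.size(0)), j]
    return perm_z

def tc_penalty(discriminator, z):
    """Total-correlation estimate from discriminator logits:
    TC(z) ~= E_q(z)[log D(z) - log(1 - D(z))], which is the difference
    of the two class logits when the discriminator outputs
    two-class scores over [q(z), permuted q(z)]."""
    logits = discriminator(z)          # shape (batch, 2)
    return (logits[:, 0] - logits[:, 1]).mean()
```

The discriminator itself is trained jointly with a standard two-class cross-entropy loss on real versus permuted codes, while the encoder receives the TC penalty weighted by a hyperparameter γ.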
“…We compare FRIED's performance with several baselines, namely, (i) FactorVAE [16], (ii) BetaVAE [14], (iii) CVAE [27], and (iv) Variational Fair AE (VFAE) [19]. The results for the UCI Adult and the dSprites datasets are shown in Fig.…”
Section: Fairness-Accuracy Trade-off (mentioning, confidence: 99%)