Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/335
Unsupervised Disentangled Representation Learning with Analogical Relations

Abstract: Learning the disentangled representation of interpretable generative factors of data is one of the foundations of enabling artificial intelligence to think like people. In this paper, we propose an analogical training strategy for unsupervised disentangled representation learning in generative models. Analogy is one of the typical cognitive processes, and our proposed strategy is based on the observation that sample pairs in which one is different from the other in one specific generative factor show the…
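The abstract is truncated, but the citing works below inspect disentanglement by latent traversal: varying one latent coordinate of a trained generator while holding the others fixed, so that each resulting sample differs in a single generative factor. A minimal sketch of that procedure (the `generate` function and the toy linear generator here are hypothetical stand-ins, not the paper's model):

```python
import numpy as np

def latent_traversal(generate, z, dim, values):
    """Vary one latent coordinate while keeping the others fixed.

    `generate` maps a latent vector to a sample; it stands in for
    a trained generator (hypothetical).
    """
    samples = []
    for v in values:
        z_mod = z.copy()
        z_mod[dim] = v          # change only the chosen factor
        samples.append(generate(z_mod))
    return samples

# Toy "generator": a fixed linear map, purely for illustration.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
generate = lambda z: W @ z

z = np.zeros(8)
frames = latent_traversal(generate, z, dim=3,
                          values=np.linspace(-2.0, 2.0, 5))
# Consecutive frames differ only through latent coordinate 3.
```

If coordinate 3 were disentangled, the frames would show one interpretable attribute changing in isolation; with the linear toy generator, consecutive frames differ exactly by a multiple of column 3 of `W`.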

Cited by 14 publications (14 citation statements). References 1 publication.
“…Particularly, we compare how well the methods capture the attributes in CelebA dataset by examining the maximum correlations and the prediction performances in canonical correlation analysis. We also compare the methods along subspace score (Li, Tang, and He 2018), an unsupervised disentanglement metric. Furthermore, we display rerendered sample sequences in the latent traversal as appropriate.…”
Section: Methods (mentioning; confidence: 99%)
“…These are methods based on the mutual independence assumption. We also include comparisons with AnaVAE (Li, Tang, and He 2018) and InfoGAN (Chen et al 2016). VAE (Kingma and Welling 2014) is also compared as a baseline.…”
Section: Methods (mentioning; confidence: 99%)