2016
DOI: 10.48550/arxiv.1612.05299
Preprint

A Survey of Inductive Biases for Factorial Representation-Learning

Abstract: With the resurgence of interest in neural networks, representation learning has re-emerged as a central focus in artificial intelligence. Representation learning refers to the discovery of useful encodings of data that make domain-relevant information explicit. Factorial representations identify underlying independent causal factors of variation in data. A factorial representation is compact and faithful, makes the causal factors explicit, and facilitates human interpretation of data. Factorial representations…

Cited by 18 publications (28 citation statements)
References 28 publications (39 reference statements)
“…Before analyzing metrics, we discuss what constitutes a disentangled representation. While there is no unanimously accepted definition of disentanglement, most agree on two main aspects [9,14,18,26,30,31]: First, the representation has to be distributed. This means that an input is a composition of explanatory factors and corresponds to a single point in the representation space.…”
Section: Properties of a Disentangled Representation
confidence: 99%
“…Factors must be conceptually independent, but should also be statistically independent [2]. This condition is hard to satisfy in real-world data sets where certain factor realizations tend to co-occur more than others [14]. For example, in a data set of fruit images, we could be interested in two conceptually different factors: fruit type and color.…”
Section: Factor Independence in Representation
confidence: 99%
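The fruit example above can be made concrete: conceptually independent factors become statistically dependent when their realizations co-occur unevenly, which mutual information detects. The following is a minimal sketch with hypothetical co-occurrence counts (the numbers and the fruit/color labels are illustrative assumptions, not data from the cited work):

```python
import numpy as np

# Hypothetical joint counts over two factors in a fruit-image data set:
# rows = fruit type (apple, banana), cols = color (red, yellow).
# Apples are mostly red and bananas mostly yellow, so the factors
# co-occur unevenly and are statistically dependent even though they
# are conceptually independent.
counts = np.array([[40.0, 10.0],
                   [5.0, 45.0]])

def mutual_information(counts):
    """Mutual information (in nats) between two factors, from joint counts."""
    p = counts / counts.sum()            # joint distribution p(x, y)
    px = p.sum(axis=1, keepdims=True)    # marginal p(x)
    py = p.sum(axis=0, keepdims=True)    # marginal p(y)
    mask = p > 0                         # skip zero-probability cells
    return float((p[mask] * np.log(p[mask] / (px * py)[mask])).sum())

dependent_mi = mutual_information(counts)
# Uniform co-occurrence: the joint factorizes, so MI is exactly zero.
independent_mi = mutual_information(np.full((2, 2), 25.0))
print(dependent_mi > 0.0)           # skewed co-occurrence gives MI > 0
print(abs(independent_mi) < 1e-12)  # uniform co-occurrence gives MI = 0
```

A nonzero value on real data is precisely the obstacle the quoted passage describes: a representation cannot be both faithful to such data and have fully statistically independent latent factors.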
“…This motivates the learning of task-agnostic representations that fare well on a wide variety of problems. Recent research suggests that disentangled representations could be suitable in this sense [Bengio et al., 2013, Ridgeway, 2016, Tschannen et al., 2018].…”
Section: Introduction
confidence: 99%
“…We are interested in fleshing out a canonical reconstruction that is invariant under such group transformations. However, vanilla VAE models easily confound invariant representation with non-equivariant variational perturbations. Various papers have called attention to the inability of generative models such as GANs (Goodfellow et al., 2014) and VAEs (Kingma and Welling, 2014) to faithfully factorize latent representations (Ridgeway, 2016). Notably, the unsupervised InfoGAN (Chen et al., 2016), β-VAE (Higgins et al., 2017), β-TCVAE (Chen et al., 2018), factor-VAE (Kim and Mnih, 2018), and the semi-supervised DC-IGN (Kulkarni et al., 2015) exemplify the effort required to learn factorized representations with modified generative models.…”
Section: Introduction
confidence: 99%
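The modified objectives named in the passage above mostly reweight or decompose the VAE's KL term. A minimal sketch of the β-VAE variant (Higgins et al., 2017), assuming a diagonal-Gaussian posterior and a squared-error reconstruction term; the function name and toy inputs are illustrative:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Sketch of the beta-VAE objective: reconstruction error plus a
    beta-weighted KL term pressing the approximate posterior
    q(z|x) = N(mu, sigma^2) toward the prior N(0, I). Setting beta > 1
    strengthens the factorization pressure on the latent code."""
    recon = np.sum((x - x_recon) ** 2)  # reconstruction term
    # Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian
    kl = 0.5 * np.sum(mu ** 2 + np.exp(log_var) - 1.0 - log_var)
    return recon + beta * kl

# A posterior that already matches the prior incurs zero KL penalty,
# and a perfect reconstruction incurs zero reconstruction error:
x = np.ones(4)
mu, log_var = np.zeros(8), np.zeros(8)
print(beta_vae_loss(x, x, mu, log_var))  # → 0.0
```

β-TCVAE and factor-VAE refine this by penalizing only the total-correlation part of the KL term, targeting statistical dependence among latent dimensions more directly.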