2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) 2019
DOI: 10.1109/iccvw.2019.00069

Learning Disentangled Representations via Independent Subspaces

Abstract: Image generating neural networks are mostly viewed as black boxes, where any change in the input can have a number of globally effective changes on the output. In this work, we propose a method for learning disentangled representations to allow for localized image manipulations. We use face images as our example of choice. Depending on the image region, identity and other facial attributes can be modified. The proposed network can transfer parts of a face such as shape and color of eyes, hair, mouth, etc. dire…

Cited by 10 publications (6 citation statements) · References 41 publications (66 reference statements)
“…Greff et al [30] and Yang et al [31] assigned groups of latent variables corresponding to the objects in an image on the basis of instance segmentation. Awiszus et al [7] assigned facial parts (e.g., mouth, nose and hair) to groups of latent variables for face image data.…”
Section: Disentangled Representation Learning
confidence: 99%
“…A popular framework for unsupervised representation learning is a deep generative model, which aims to generate high-dimensional images from low-dimensional latent variables [1], [3]- [6]. Disentangled representation learning (DRL) aims at separating the representation of latent variables into disjoint parts corresponding to semantically meaningful features [1], [2], [4], [6], [7]. Disentangled representations can be beneficial for various tasks of computer vision such as controllable image generation [8]- [11], person identification [12], [13] and robust adversarial training [14].…”
Section: Introduction
confidence: 99%
“…There are other variants of Autoencoders which enforce a specific distribution in the latent space, either by a variational approach [12] or by applying a discriminator network on the latent space, known as Adversarial Autoencoders [20]. Other works focused on obtaining disentangled representations of data in the latent space [14,7,10,1]. There are several other variants that impose additional constraints on the latent variables, mostly for specific applications [22,6,25,18,4,17,5].…”
Section: Introduction and Related Work
confidence: 99%
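The grouped-latent idea running through these citation statements — assigning disjoint subspaces of the latent code to semantic parts so that a single part can be swapped between images — can be sketched minimally. The subspace layout, dimensions, and part names below are illustrative assumptions, not the actual architecture of the cited paper:

```python
import numpy as np

# Hypothetical layout: a 24-dim latent code partitioned into per-part
# subspaces, in the spirit of the grouped-latent approaches cited above.
SUBSPACES = {"eyes": slice(0, 8), "mouth": slice(8, 16), "hair": slice(16, 24)}

def swap_part(z_a, z_b, part):
    """Return a copy of z_a whose `part` subspace is taken from z_b.

    Because the subspaces are disjoint, only the chosen part's
    coordinates change; all other coordinates of z_a are untouched,
    which is what enables localized image manipulation after decoding.
    """
    z_new = z_a.copy()
    z_new[SUBSPACES[part]] = z_b[SUBSPACES[part]]
    return z_new

rng = np.random.default_rng(0)
z_a = rng.normal(size=24)  # latent code of image A
z_b = rng.normal(size=24)  # latent code of image B
z_mix = swap_part(z_a, z_b, "mouth")  # A's code with B's mouth subspace
```

Decoding `z_mix` with the generator would then (under the disentanglement assumption) reproduce image A with image B's mouth, leaving identity and the other facial parts intact.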
“…Over the past two decades, it has become a fundamental tool in independent subspace analysis (ISA) (e.g., [7,29]). ISA has found many applications in machine learning tasks, e.g., subspace clustering [33,27,32], face recognition/verification [21,20,19,4], learning of disentangled representations [2,26], etc. In this paper, we consider the identification problem for a blind joint block diagonalization problem (JBDP).…”
Section: Introduction
confidence: 99%