2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2019.00095

Style and Content Disentanglement in Generative Adversarial Networks

Abstract: Disentangling factors of variation within data has become a very challenging problem for image generation tasks. Current frameworks for training a Generative Adversarial Network (GAN) learn to disentangle the representations of the data in an unsupervised fashion and capture the most significant factors of variation. However, these approaches ignore the principle of content and style disentanglement in image generation, which means their learned latent code may alter the content and style of the gen…

Cited by 65 publications (42 citation statements) · References 33 publications
“…Feature Disentanglement. In recent years, studies [5,7,11,13,14,19,29,31,32] have used generative adversarial networks (GANs), which can in some cases be applied to the style transfer task, to achieve image-to-image translation. A key insight for the style transfer task is that the style and content features should be disentangled because of the domain deviation.…”
Section: Related Work
“…Kotovenko et al [13] proposed a disentanglement loss to separate style and content. Kazemi et al [11] described a style- and content-disentangled GAN (SC-GAN) that learns a semantic representation of content and textural patterns of style. Yu et al [29] disentangled the input into a latent code through an encoder-decoder network.…”
Section: Related Work
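A disentanglement loss of this kind can be illustrated with a toy sketch (hypothetical code, not the implementation from any of the cited papers): two inputs are encoded into content and style parts, the styles are swapped, and the loss penalizes any drift in the re-encoded content.

```python
import numpy as np

def encode(x, content_dim):
    # Toy stand-in for a learned encoder: split the vector into
    # a content part and a style part.
    return x[:content_dim], x[content_dim:]

def decode(content, style):
    # Toy stand-in for a learned generator: concatenate the codes.
    return np.concatenate([content, style])

def content_preservation_loss(x_a, x_b, content_dim):
    """Swap styles between two samples and penalize (squared L2)
    any change in the re-encoded content code."""
    c_a, _ = encode(x_a, content_dim)
    _, s_b = encode(x_b, content_dim)
    x_ab = decode(c_a, s_b)                 # content of A, style of B
    c_ab, _ = encode(x_ab, content_dim)
    return float(np.sum((c_ab - c_a) ** 2))

rng = np.random.default_rng(0)
x_a, x_b = rng.normal(size=16), rng.normal(size=16)
loss = content_preservation_loss(x_a, x_b, content_dim=8)
```

With these identity-like placeholder networks the loss is zero by construction; in a real model, the encoder and generator are trained so that this term stays small while other losses (adversarial, reconstruction) are optimized.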
“…Lee et al [LTH*18] divide the latent space into a content space and a domain-specific attribute space, which helps keep the content unchanged when the image style is changed. Kazemi et al [KIN19] likewise learn to disentangle the representation of style from that of content. Since these methods focus only on a certain aspect of the representation, they generally cannot be used to transfer multiple specified attributes simultaneously.…”
Section: Disentangled Representation
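One common way to realize such a content/style split — sketched here in NumPy purely as an illustration, not taken from either cited paper — is to reserve part of the latent code for content and let the style part modulate feature statistics, in the spirit of adaptive instance normalization (AdaIN):

```python
import numpy as np

def split_latent(z, content_dim):
    # Hypothetical layout: the first content_dim entries carry content,
    # the remainder carries style.
    return z[:content_dim], z[content_dim:]

def adain(content_feat, style_mean, style_std, eps=1e-5):
    # Adaptive instance normalization: normalize the content features,
    # then re-scale them with statistics derived from the style code.
    mu, sigma = content_feat.mean(), content_feat.std()
    return style_std * (content_feat - mu) / (sigma + eps) + style_mean

z = np.random.default_rng(1).normal(size=16)
content, style = split_latent(z, content_dim=8)
styled = adain(content, style_mean=style[0], style_std=abs(style[1]))
```

Because only the normalized statistics are replaced, the spatial arrangement of the content features is preserved while the style code controls their mean and scale — which is why changing the style part leaves the content recognizable.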
“…The main goal of artificial intelligence (AI) is to approach complicated tasks from raw sensory inputs. In recent years, deep learning techniques have quickly become the state of the art in most AI-related applications, especially in computer vision [107,108,109,110,111,112,113,114,115].…”
Section: Introduction