2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00452
Content and Style Disentanglement for Artistic Style Transfer

Cited by 160 publications (90 citation statements); References 13 publications.
“…Feature Disentanglement. In recent years, studies [5,7,11,13,14,19,29,31,32] have used generative adversarial networks (GANs), which can in some cases be applied to the style transfer task, to achieve image-to-image translation. A key idea for the style transfer task is that the style and content features should be disentangled because of the domain deviation.…”
Section: Related Work
confidence: 99%
“…representation. Kotovenko et al [13] proposed a disentanglement loss to separate style and content. Kazemi et al [11] described a style and content disentangled GAN (SC-GAN) to learn a semantic representation of content and textural patterns of style.…”
Section: Related Work
confidence: 99%
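The disentanglement loss cited above pushes a network to keep content and style in separate representations. As a rough illustration only (a generic triplet-style hinge formulation, not the exact loss of Kotovenko et al [13]), such an objective can be sketched as: embeddings of the same content rendered in different styles should lie closer together than embeddings of different contents.

```python
import numpy as np

def triplet_disentanglement_loss(anchor, positive, negative, margin=0.2):
    # anchor, positive: content embeddings of the SAME scene in two styles
    # negative: content embedding of a DIFFERENT scene; all shape (batch, dim)
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # should be small
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # should be large
    # hinge: penalize batches where the content match is farther than the mismatch
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())

# Toy usage: identical content embeddings, distant negative -> zero loss
a = np.zeros((4, 8))
p = np.zeros((4, 8))        # same content, different style (identical here)
n = np.ones((4, 8)) * 10.0  # different content, far away in embedding space
loss = triplet_disentanglement_loss(a, p, n)
```

The margin value and embedding shapes here are illustrative placeholders; in practice the embeddings would come from a content encoder trained jointly with the stylization network.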
[Compared methods in the accompanying figure: Style-Aware, AdaIN, AAMS, WCT, Ours w/o ASM]
“…abstract concept, it is difficult to use quantitative metrics for a comprehensive measurement. Based on this fact, we introduced two types of user studies, the Style Deception Score and the Semantic Retention Score, with reference to [20,21,32], to perceptually evaluate the effectiveness of our algorithm. Semantic Retention Ratio.…”
Section: Ours With ASM
confidence: 99%