2017
DOI: 10.48550/arxiv.1706.03319
Preprint
Style Transfer for Anime Sketches with Enhanced Residual U-net and Auxiliary Classifier GAN

Cited by 11 publications (18 citation statements). References 7 publications.
“…[11] is applied on numerous tasks such as sketch to photo, image colorization, and aerial photo to map. Similarly, [28] use a conditional GAN with Residual UNet [22] framework supplemented with embeddings from VGG19 [24] for sketch colorization task. Both show that conditional GAN and UNet are effective when paired data are readily available or can be generated.…”
Section: Domain Transfer Through Generative Adversarial Network
confidence: 99%
“…For image translation task, some details and spatial information may be lost in the down-sampling process. UNet [22] is commonly used in image translation tasks [11,28] where details of the input image can be preserved in the output through the skip connection. We adapt such structure and our encoder mirrors the PGGAN generator structure -growing as the generator grows to a higher resolution.…”
Section: Architecture
confidence: 99%
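The excerpt above describes why U-Net skip connections matter for image translation: down-sampling discards spatial detail, and the skip connection reintroduces the encoder's full-resolution features into the decoder. A minimal NumPy sketch of that idea (shapes, pooling, and upsampling choices are illustrative assumptions, not the cited papers' implementations):

```python
import numpy as np

def downsample(x):
    """2x2 average pooling over an (H, W, C) feature map."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    """2x nearest-neighbour upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_block(x):
    skip = x                    # keep the full-resolution features
    bottleneck = downsample(x)  # spatial detail is lost here
    up = upsample(bottleneck)   # resolution restored, detail is not
    # Skip connection: concatenate along channels so the decoder sees
    # both coarse context and the encoder's original fine detail.
    return np.concatenate([up, skip], axis=-1)

x = np.random.rand(8, 8, 4)
y = unet_block(x)
print(y.shape)  # (8, 8, 8): channels doubled by the concatenation
```

Without the concatenation, the decoder would receive only the blurred, upsampled bottleneck; with it, the original pixels of the input survive to the output path.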
“…Specifically, a network takes a contour image drawn by the designers as input and then outputs the colorized icon image. Similar ideas have been adopted to colorize black-and-white Manga characters [5,8,14,42] and achieved great success. To control the colorization process, additional inputs, such as stroke colors and style images, are fed into the network as well.…”
Section: Introduction
confidence: 99%
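The last excerpt notes that control signals such as stroke colors are "fed into the network as well." A common way to do this, sketched below under assumed shapes (the names and layout are hypothetical, not the cited papers' APIs), is to stack the hint map with the sketch as extra input channels:

```python
import numpy as np

def build_input(sketch, color_hints):
    """Stack a 1-channel sketch with a 3-channel RGB hint map along channels."""
    assert sketch.shape[:2] == color_hints.shape[:2]
    return np.concatenate([sketch[..., None], color_hints], axis=-1)

sketch = np.zeros((64, 64))             # grayscale contour drawing
hints = np.zeros((64, 64, 3))           # sparse user-provided color strokes
hints[10:14, 10:14] = [1.0, 0.0, 0.0]   # e.g. a red stroke on one region

net_in = build_input(sketch, hints)
print(net_in.shape)  # (64, 64, 4): the generator conditions on all four channels
```

The generator then learns to propagate the sparse hint colors across the regions they touch, which is what makes the colorization controllable.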