2021 | DOI: 10.3390/electronics10040497
Colorization of Logo Sketch Based on Conditional Generative Adversarial Networks

Abstract: Logo design is a complex process for designers, and color plays a very important role in it. The automatic colorization of logo sketches is of great value and full of challenges. In this paper, we propose a new logo design method based on Conditional Generative Adversarial Networks, which can output multiple colorful logos from a single logo sketch. We improve the traditional U-Net structure, adding channel attention and spatial attention to the skip connections. In addition, the genera…
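The abstract's key architectural idea is attention applied on the U-Net skip connections. As a rough illustration only (the paper's exact layer configuration is not given here), the sketch below shows a CBAM-style channel-plus-spatial attention block applied to an encoder feature map before it is concatenated with the matching decoder feature map. It assumes PyTorch, and all class names are hypothetical:

```python
# Minimal sketch of an attention-augmented U-Net skip connection,
# assuming PyTorch. Sizes and placement are illustrative, not the
# paper's exact design.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w                         # reweight channels

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # per-pixel mean over channels
        mx = x.amax(dim=1, keepdim=True)     # per-pixel max over channels
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                         # reweight spatial positions

class AttentiveSkip(nn.Module):
    """Attend over the encoder feature map (channel, then spatial),
    then concatenate it with the decoder feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, enc_feat, dec_feat):
        enc_feat = self.sa(self.ca(enc_feat))
        return torch.cat([enc_feat, dec_feat], dim=1)

# usage (illustrative): skip = AttentiveSkip(256); out = skip(enc, dec)
```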

Cited by 10 publications (9 citation statements) | References: 33 publications
“…Early methods (Tian 2016, 2017; Jiang et al. 2017; Lyu et al. 2017; Chang et al. 2018; Sun, Zhang, and Yang 2018; Jiang et al. 2019; Yang et al. 2019a,b; Gao and Wu 2020; Wu, Yang, and Hsu 2020; Wen et al. 2021; Hassan, Ahmed, and Choi 2021) utilize image-to-image translation networks (Zhang et al. 2022) to achieve font generation by learning the mapping function between different fonts. Tian presents zi2zi (Tian 2017), which modifies pix2pix (Isola et al. 2017) to make it suitable for font generation. AGEN (Lyu et al. 2017) proposes a model for synthesizing Chinese calligraphy images in a specified style from standard font images.…”
Section: Many-shot Font Generation | mentioning | confidence: 99%
“…In the domain of image style transfer, there are two types of data inputs: paired character images, such as in pix2pix [6] and zi2zi [7], and unpaired character images, such as in CycleGAN [9] and CS-GAN [12].…”
Section: Related Work | mentioning | confidence: 99%
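For readers unfamiliar with the paired/unpaired distinction this citation draws, the sketch below (not code from any cited paper; the function names and the generator/discriminator interfaces are hypothetical) contrasts a pix2pix/zi2zi-style paired objective, which penalizes the output against a known target image, with a CycleGAN-style unpaired objective, which has no target pairs and instead enforces cycle consistency. It assumes PyTorch:

```python
import torch
import torch.nn.functional as F

def paired_loss(G, D, x, y, lam=100.0):
    """Paired setting (pix2pix/zi2zi-style): adversarial term plus an
    L1 penalty against the known paired target y."""
    fake = G(x)
    logits = D(torch.cat([x, fake], dim=1))  # conditional D sees input + output
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + lam * F.l1_loss(fake, y)

def unpaired_cycle_loss(G_xy, G_yx, x, y, lam=10.0):
    """Unpaired setting (CycleGAN-style): no paired targets, so require
    the two generators to reconstruct each input when composed
    (adversarial terms omitted for brevity)."""
    return lam * (F.l1_loss(G_yx(G_xy(x)), x) + F.l1_loss(G_xy(G_yx(y)), y))
```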
“…However, this approach requires a significant amount of human intervention in generating Chinese characters and can often be complicated to implement. The second approach involves using image style transfer techniques to convert Chinese characters from one style to another [6,7,8,9]. This approach is fully automatic and requires no human intervention, making it relatively simple and effective. Existing methods such as pix2pix and zi2zi can generate fonts, although some of the generated fonts may be blurry.…”
Section: Introduction | mentioning | confidence: 99%
“…Referring to image-to-image translation methods [2] between different domains, the HCCAG task is also regarded as an image-to-image style translation problem. In these related works, different font styles, such as the DFKai-SB script, running script, DFKai-SB and Pen-Kai scripts, SIM-Kai script, and Lanting script, are regarded as different data domains. Zi2zi [3] was the first work to use a GAN to generate Chinese characters but required paired data as input.…”
Section: Introduction | mentioning | confidence: 99%