2019
DOI: 10.1145/3355089.3356574

Artistic glyph image synthesis via one-stage few-shot learning

Fig. 1. (a) An overview of our method: given a few reference samples (5 for English or 30 for Chinese), glyph images of all other characters in the same style can be synthesized. (b) Examples of synthesized English/Chinese glyph images obtained by our proposed AGIS-Net, MC-GAN [Azadi et al. 2018], and TET-GAN [Yang et al. 2019], respectively (please zoom in for better inspection).

Abstract: Automatic generation of artistic glyph images is a challenging task that has attracted considerable research interest. Previous methods either are…
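To make the "one-stage few-shot" setting in the abstract concrete, the minimal PyTorch sketch below fuses a content glyph with a style code averaged over the K reference samples (K=5 for English, K=30 for Chinese, per Fig. 1) and decodes a stylized glyph in a single pass. The class name, layer sizes, and plain forward pass are illustrative assumptions, not the published AGIS-Net architecture, which handles both shape and texture styles and is trained adversarially.

```python
# Toy one-stage few-shot glyph style transfer (illustrative only; NOT the
# published AGIS-Net, which predicts shape and texture and trains with GANs).
import torch
import torch.nn as nn

class GlyphStyleTransfer(nn.Module):
    def __init__(self, style_dim: int = 128):
        super().__init__()
        # Content encoder: plain (source-font) glyph image -> spatial features.
        self.content_enc = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Style encoder: one stylized reference glyph -> style vector.
        self.style_enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, style_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Decoder: content features + broadcast style code -> stylized glyph.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128 + style_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, content_img, style_refs):
        # style_refs: (B, K, 3, H, W) -- the K few-shot reference samples.
        b, k, c, h, w = style_refs.shape
        style = self.style_enc(style_refs.view(b * k, c, h, w))
        style = style.view(b, k, -1).mean(dim=1)      # average the K style codes
        feat = self.content_enc(content_img)           # (B, 128, H/4, W/4)
        style_map = style[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        return self.dec(torch.cat([feat, style_map], dim=1))

# Smoke test with random tensors: 64x64 glyphs, 5 reference samples.
net = GlyphStyleTransfer()
out = net(torch.randn(2, 1, 64, 64), torch.randn(2, 5, 3, 64, 64))
print(out.shape)  # torch.Size([2, 3, 64, 64])
```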

Cited by 84 publications (79 citation statements)
References 24 publications
“…Recently, many CNN-based models for offline Chinese glyph synthesis have emerged [RMC15, SRL*17, ZPIE17, Tia17, ZZC18, GGL*19, WGL20]. However, these methods fail to reflect the process of human writing and cannot handle scribbled handwriting (see Figure 1).…”
Section: Related Work (mentioning)
confidence: 99%
“…Multi-Content GAN [3] is the first method for English artistic font generation, but it cannot handle Chinese characters. Both TET-GAN [35] and AGIS-Net [6] are proposed for Chinese artistic font generation. Existing text-effects transfer methods focus on transferring texture styles and cannot cope with font generation tasks that require geometric style transfer.…”
Section: Font Generation (mentioning)
confidence: 99%
“…MC-GAN [1] presents a stacked conditional GAN (cGAN) architecture to predict coarse glyph shapes, together with an ornamentation network to predict the color and texture of the final glyphs. Unlike the two-stage MC-GAN framework, AGIS-Net [8] transfers both shape and texture styles in one stage from only a few stylized samples, improving computational efficiency. FontRNN [24] treats Chinese characters as sequences of points (writing trajectories) and handles the font generation task with a Recurrent Neural Network (RNN) model.…”
Section: Data-driven Font Generation (mentioning)
confidence: 99%
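The sequence view that this statement attributes to FontRNN can be illustrated with a toy model: a glyph becomes a pen trajectory of (dx, dy, pen_down) steps, and an RNN predicts the next move. The TrajectoryRNN class and its single-layer next-step regression are illustrative assumptions only; the published FontRNN is a more elaborate encoder-decoder model trained on real writing trajectories. This sketch just shows the data representation that distinguishes it from the image-based methods above.

```python
# Toy trajectory model in the spirit of FontRNN's point-sequence view
# (illustrative assumption; not the published FontRNN architecture).
import torch
import torch.nn as nn

class TrajectoryRNN(nn.Module):
    """Predicts the next pen move from a partial writing trajectory.

    Each timestep is (dx, dy, pen_down): the offset to the next point and
    whether the pen touches the paper, as in stroke-based glyph datasets.
    """
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # next (dx, dy, pen_down logit)

    def forward(self, strokes):           # strokes: (B, T, 3)
        out, _ = self.rnn(strokes)
        return self.head(out)             # (B, T, 3): prediction at every step

model = TrajectoryRNN()
batch = torch.randn(4, 50, 3)             # 4 glyphs, 50 pen moves each
pred = model(batch)
print(pred.shape)                          # torch.Size([4, 50, 3])
```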
“…Recently, data-driven approaches have been proposed to robustly investigate the correlations between different objects/instances without relying on hard-coded metrics. Along this direction, there is an emerging trend of automatically generating different styles of fonts/glyphs with deep neural networks (DNNs), such as the DNN-based method [3], a modified variational autoencoder (VAE) method [26], an image-to-image translation model [18], zi2zi [25], DCFont [12], Hierarchical Adversarial Network (HAN) [7], a CycleGAN-based model [6], SCFont [13], PEGAN [22], MC-GAN [1], AGIS-Net [8], FontRNN [24], a convolutional recurrent generative model [4], Glyph-GAN [10], FontGAN [16], etc. However, most of the above methods cannot automatically generate large-scale fonts with high quality and high consistency in geometry and content/style for characters and words/paragraphs from a few given samples.…”
Section: Introduction (mentioning)
confidence: 99%