2020
DOI: 10.1016/j.neucom.2019.08.072

SSNet: Structure-Semantic Net for Chinese typography generation based on image translation

Cited by 14 publications (9 citation statements). References 9 publications.
“…It can directly transfer the learned style to a new font instead of relearning the mapping function from each new source style to the target style. SSNet [14] argued that existing methods do not take Chinese semantics and structure into consideration. Instead of directly generating font images, it employs a structure module and a semantic module to acquire the font's structural features and the characters' semantics, and then combines this information to generate the final target typography.…”
Section: Disentangled Representations
confidence: 99%
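The statement above describes SSNet's two-branch design only at a high level. A minimal sketch of that idea, with a structure encoder and a semantic (character-identity) branch whose features are fused before decoding, might look like the following; the module shapes, the embedding-based semantic branch, and the layer sizes are illustrative assumptions, not SSNet's published architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Conv -> BatchNorm -> ReLU, downsampling by 2."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TwoBranchFontGenerator(nn.Module):
    """Illustrative two-branch generator: one branch encodes glyph
    structure from the source image, the other embeds the character's
    identity (a crude stand-in for 'semantics'); the decoder fuses
    both. This is an assumed sketch, not SSNet's actual network."""

    def __init__(self, num_chars=3755, embed_dim=128):
        super().__init__()
        # Structure branch: encodes the source glyph image (1x64x64).
        self.structure_enc = nn.Sequential(
            conv_block(1, 64),     # 64 -> 32
            conv_block(64, 128),   # 32 -> 16
            conv_block(128, 256),  # 16 -> 8
        )
        # Semantic branch: a learned per-character embedding, projected
        # to the same spatial size as the structure feature map.
        self.semantic_emb = nn.Embedding(num_chars, embed_dim)
        self.semantic_proj = nn.Linear(embed_dim, 256 * 8 * 8)
        # Decoder: upsamples the fused features back to a glyph image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 1, 4, stride=2, padding=1),    # 32 -> 64
            nn.Tanh(),
        )

    def forward(self, src_img, char_ids):
        struct = self.structure_enc(src_img)                  # (B, 256, 8, 8)
        sem = self.semantic_proj(self.semantic_emb(char_ids))
        sem = sem.view(-1, 256, 8, 8)                         # match spatial map
        fused = torch.cat([struct, sem], dim=1)               # (B, 512, 8, 8)
        return self.decoder(fused)

# Example: one batch of 4 source glyphs with their character IDs.
g = TwoBranchFontGenerator()
out = g(torch.randn(4, 1, 64, 64), torch.randint(0, 3755, (4,)))
print(out.shape)  # torch.Size([4, 1, 64, 64])
```

Per the citation, the real semantic module models character semantics rather than a bare ID lookup; the embedding here is simply the smallest self-contained stand-in for that branch.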
“…1) We propose a font fusion network that creates new font images by fusing the disentangled skeleton shape and stroke style of different complex font images, whereas existing approaches [11][12][13][14] focus on font image translation (imitation of existing font images). 2) To address the problem that the existing method [10] cannot fuse font images stably, we propose a fuzzy supervised learning scheme that stabilizes GAN training by introducing fault-tolerance factors.…”
Section: Introduction
confidence: 99%
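The excerpt names "fault-tolerance factors" but does not define them. One plausible reading, sketched below purely as an assumption, is a reconstruction loss that forgives per-pixel errors under a small tolerance, so the generator is not destabilized by negligible stroke misalignments; the soft-threshold form and the name fault_tolerant_l1 are hypothetical, not the cited paper's formulation.

```python
import torch
import torch.nn.functional as F

def fault_tolerant_l1(fake, real, tolerance=0.1):
    """Hypothetical fault-tolerant reconstruction loss: per-pixel L1
    errors below `tolerance` are forgiven (treated as zero), so the
    generator is not punished for tiny stroke misalignments. An
    assumed reading of 'fault-tolerance factors', for illustration."""
    err = (fake - real).abs()
    # Soft-threshold: subtract the tolerance, clamp negatives to zero.
    return F.relu(err - tolerance).mean()

# Usage inside a generator update (adversarial term omitted):
fake = torch.rand(4, 1, 64, 64, requires_grad=True)
real = torch.rand(4, 1, 64, 64)
loss = fault_tolerant_l1(fake, real)
loss.backward()
```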
“…Stylish Chinese font generation has attracted rising attention in recent years (Lin et al. 2016; Cha et al. 2020; Chang, Gu, and Zhang 2017; Tian 2017; Kong and Xu 2017; Jiang et al. 2017, 2019; Chang et al. 2018; Chen et al. 2019; Wu, Yang, and Hsu 2020; Gao and Wu 2020; Zhang et al. 2020), since it has a wide range of applications, including but not limited to automatic generation of artistic Chinese calligraphy (Zhao et al. 2020), art font design (Lin et al. 2014), and personalized style generation of Chinese characters (Liu, Xu, and Lin 2012).…”
Section: Introduction
confidence: 99%
“…The second category of Chinese font generation methods has recently been studied in (Tian 2017; Chang et al. 2018; Chen et al. 2019; Gao and Wu 2020; Wu, Yang, and Hsu 2020; Zhang et al. 2020) alongside the development of deep learning (Goodfellow, Bengio, and Courville 2016), particularly generative adversarial networks (GANs) (Goodfellow et al. 2014). Owing to the powerful expressivity and approximation ability of deep neural networks, feature extraction and generation can be combined into a single procedure, so methods in this category can usually be trained end to end.…”
Section: Introduction
confidence: 99%
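To make "end to end" concrete: in this category a generator maps a source-font glyph directly to the target-font glyph and is trained jointly with a discriminator, with no separate hand-crafted feature-extraction stage. A minimal conditional-GAN training step in the pix2pix style could look like the sketch below; the toy G and D and the lambda_l1 weight are common defaults assumed for illustration, not values from the cited papers.

```python
import torch
import torch.nn as nn

def gan_font_step(G, D, opt_g, opt_d, src, tgt, lambda_l1=100.0):
    """One conditional-GAN step for font translation: D judges
    (source, glyph) pairs; G is trained with adversarial + L1 loss.
    A generic pix2pix-style step, assumed rather than taken from
    the cited papers."""
    bce = nn.BCEWithLogitsLoss()

    # --- Discriminator: real pairs vs. generated pairs ---
    fake = G(src)
    d_real = D(torch.cat([src, tgt], dim=1))
    d_fake = D(torch.cat([src, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- Generator: fool D and stay close to the target glyph ---
    d_fake = D(torch.cat([src, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + \
             lambda_l1 * (fake - tgt).abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy modules just so the step runs; real models would use a
# U-Net generator and a PatchGAN discriminator.
G = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(2, 1, 4, stride=2, padding=1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
src, tgt = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
print(gan_font_step(G, D, opt_g, opt_d, src, tgt))
```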