2021
DOI: 10.48550/arxiv.2110.02891
Preprint

Style Equalization: Unsupervised Learning of Controllable Generative Sequence Models

Abstract: Controllable generative sequence models with the capability to extract and replicate the style of specific examples enable many applications, including narrating audiobooks in different voices, auto-completing and auto-correcting written handwriting, and generating missing training samples for downstream recognition tasks. However, typical training algorithms for these controllable sequence generative models suffer from the training-inference mismatch, where the same sample is used as content and style input d…

Cited by 1 publication (4 citation statements). References 28 publications.
“…Online handwriting synthesis models [2,12,13,14,15,16] are sequence-to-sequence models that take input text (a sequence of characters) and output handwriting (a sequence of strokes). To improve downstream recognizers, it is important for the synthesized handwriting to contain few artifacts and a wide range of handwriting styles, from highly legible, printed-style handwriting to less legible, cursive handwriting.…”
Section: Related Work
confidence: 99%
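
The citation above describes these synthesizers as sequence-to-sequence models from characters to strokes. The following is a minimal PyTorch sketch of that shape only; the module names, the 3-dimensional stroke encoding (dx, dy, pen-up), and all sizes are illustrative assumptions, not the architecture of any of the cited papers [2,12-16].

```python
# Minimal text-to-strokes sequence-to-sequence sketch (illustrative assumptions).
import torch
import torch.nn as nn

class TextToStrokes(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)             # character embeddings
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)   # encodes the input text
        self.decoder = nn.GRU(3, hidden, batch_first=True)        # consumes strokes (dx, dy, pen_up)
        self.head = nn.Linear(hidden, 3)                          # predicts the next stroke point

    def forward(self, chars: torch.Tensor, strokes: torch.Tensor) -> torch.Tensor:
        # chars: (batch, T_text) int64; strokes: (batch, T_stroke, 3) float
        _, h = self.encoder(self.embed(chars))   # summarize the text into a hidden state
        out, _ = self.decoder(strokes, h)        # condition the stroke decoder on that summary
        return self.head(out)                    # next-point predictions, (batch, T_stroke, 3)

# Usage on a toy batch: 2 text sequences of 12 characters, 50 stroke points each.
model = TextToStrokes(vocab_size=80)
chars = torch.randint(0, 80, (2, 12))
strokes = torch.randn(2, 50, 3)
pred = model(chars, strokes)   # (2, 50, 3)
```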
“…We use the controllable generative model proposed in [2]. Given $N$ real training samples $X = \{(x_i, c_i)\}_{i=1}^{N}$, the generative model learns a distribution of handwriting samples, $p(x \mid c, z)$, conditioned on the content $c$ and the style $z$.…”
Section: Controllable Generative Model
confidence: 99%
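
The statement above describes a model of $p(x \mid c, z)$ trained on pairs $(x_i, c_i)$, with the style $z$ inferred from a reference sample. Below is a hedged PyTorch sketch of that factorization: a style encoder producing $z$, and a decoder conditioned on both content and style. Every class name and dimension here is an assumption for illustration; see [2] for the actual model.

```python
# Sketch of a controllable generative model p(x | c, z) (illustrative assumptions).
import torch
import torch.nn as nn

class StyleEncoder(nn.Module):
    """Maps a reference handwriting sample to a style vector z."""
    def __init__(self, hidden: int = 256, z_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(3, hidden, batch_first=True)
        self.to_z = nn.Linear(hidden, z_dim)

    def forward(self, x_ref: torch.Tensor) -> torch.Tensor:
        _, h = self.rnn(x_ref)    # final hidden state: (1, batch, hidden)
        return self.to_z(h[-1])   # style vector z: (batch, z_dim)

class ConditionalDecoder(nn.Module):
    """Models p(x | c, z): generates strokes from content c and style z."""
    def __init__(self, vocab_size: int, hidden: int = 256, z_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.enc = nn.GRU(hidden, hidden, batch_first=True)    # content encoder
        self.dec = nn.GRU(3 + z_dim, hidden, batch_first=True) # stroke decoder, style-conditioned
        self.head = nn.Linear(hidden, 3)

    def forward(self, c: torch.Tensor, z: torch.Tensor, x_prev: torch.Tensor) -> torch.Tensor:
        # c: (batch, T_text); z: (batch, z_dim); x_prev: (batch, T, 3)
        _, h = self.enc(self.embed(c))                          # content summary
        z_seq = z.unsqueeze(1).expand(-1, x_prev.size(1), -1)   # broadcast z over time steps
        out, _ = self.dec(torch.cat([x_prev, z_seq], dim=-1), h)
        return self.head(out)                                   # predicted next stroke points

# Training pair (x_i, c_i): infer z from x_i, then reconstruct x_i given (c_i, z).
style_enc, dec = StyleEncoder(), ConditionalDecoder(vocab_size=80)
x_i, c_i = torch.randn(2, 50, 3), torch.randint(0, 80, (2, 12))
z = style_enc(x_i)
x_pred = dec(c_i, z, x_i)   # teacher-forced reconstruction, (2, 50, 3)
```

Note that using the same sample $x_i$ as both the style reference and the reconstruction target is exactly the training-inference mismatch the abstract refers to: at inference time the style reference and the generated content are unpaired.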