2019
DOI: 10.48550/arxiv.1910.05026
Preprint

Customizing Sequence Generation with Multi-Task Dynamical Systems

Alex Bird,
Christopher K. I. Williams

Abstract: Dynamical system models (including RNNs) often lack the ability to adapt the sequence generation or prediction to a given context, limiting their real-world application. In this paper we show that hierarchical multi-task dynamical systems (MTDSs) provide direct user control over sequence generation, via use of a latent code z that specifies the customization to the individual data sequence. This enables style transfer, interpolation and morphing within generated sequences. We show the MTDS can improve predicti…
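To make the abstract's central idea concrete, the following is a minimal sketch, not the authors' code: it assumes a simple deterministic RNN whose parameters are produced from the latent code z by a linear hypernetwork, so that interpolating z morphs the generated sequence. All names, shapes, and the class MTDSSketch are illustrative assumptions.

```python
# Illustrative sketch of the MTDS idea: a latent code z is mapped to the
# parameters of a small RNN, so different z values give different "styles"
# and interpolating z interpolates the generated sequences.
import torch
import torch.nn as nn

class MTDSSketch(nn.Module):
    def __init__(self, z_dim=3, hidden=16, obs=2):
        super().__init__()
        self.hidden, self.obs = hidden, obs
        n_params = hidden * hidden + obs * hidden + hidden   # A, C, b
        # Hypernetwork: latent code z -> flattened RNN parameters.
        self.hyper = nn.Linear(z_dim, n_params)

    def forward(self, z, T=50):
        theta = self.hyper(z)
        h2, ho = self.hidden * self.hidden, self.obs * self.hidden
        A = theta[:h2].view(self.hidden, self.hidden)        # transition
        C = theta[h2:h2 + ho].view(self.obs, self.hidden)    # emission
        b = theta[h2 + ho:]                                  # bias
        h, ys = torch.zeros(self.hidden), []
        for _ in range(T):                                   # roll dynamics forward
            h = torch.tanh(A @ h + b)
            ys.append(C @ h)
        return torch.stack(ys)                               # (T, obs) sequence

model = MTDSSketch()
z_a, z_b = torch.randn(3), torch.randn(3)    # two hypothetical "styles"
y_mid = model(0.5 * (z_a + z_b))             # interpolating z morphs the output
```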

Cited by 1 publication (2 citation statements)
References 28 publications

“…Sharing parameters across all styles allows us to be more compact with our style representation, as style agnostic variation can be modelled with one shared set of parameters. Compared to other methods [6,38], our style representation is significantly smaller.…”
Section: Theoretical Analysis (mentioning)
confidence: 95%
“…Both FiLM and residual adaptation can be seen as a special case of the Multi-Task Dynamical Systems (MTDS) framework which models style by changing the parameters of neural network layers [6]. To see this, we write a feed-forward layer as 𝒉^(𝑖+1) = 𝜎(𝑾𝒉^(𝑖) + 𝒃), with 𝜎 a non-linearity, 𝒉^(𝑖) the hidden units for layer 𝑖, and 𝑾, 𝒃 the layer weights and biases.…”
Section: Theoretical Analysis (mentioning)
confidence: 99%
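The reduction described in the quote above can be sketched in a few lines. This is a hedged illustration, not code from the cited papers: it assumes PyTorch, and the names mtds_layer, film_layer, residual_layer and the hypernetwork shapes are hypothetical. The point is only that FiLM and residual adaptation correspond to restricted ways of letting the layer parameters 𝑾, 𝒃 depend on the style code z.

```python
# Sketch: a generic layer h^(i+1) = sigma(W h^(i) + b) whose parameters
# depend on a style code z (the MTDS view), with FiLM and residual
# adaptation as restricted special cases of that dependence.
import torch
import torch.nn as nn

d_in, d_out, z_dim = 8, 8, 4
W0 = torch.randn(d_out, d_in)          # shared, style-agnostic weights
b0 = torch.zeros(d_out)

def layer(h, W, b):
    # Generic feed-forward layer from the quoted equation.
    return torch.tanh(h @ W.T + b)

# 1) Full MTDS-style adaptation: W(z), b(z) produced by a hypernetwork.
hyper = nn.Linear(z_dim, d_out * d_in + d_out)

def mtds_layer(h, z):
    theta = hyper(z)
    W = theta[: d_out * d_in].view(d_out, d_in)
    b = theta[d_out * d_in:]
    return layer(h, W, b)

# 2) FiLM as a special case: only a per-unit scale and shift depend on z,
#    i.e. W(z) = diag(gamma(z)) W0 and b(z) = beta(z).
film = nn.Linear(z_dim, 2 * d_out)

def film_layer(h, z):
    gamma, beta = film(z).chunk(2)
    return layer(h, torch.diag(gamma) @ W0, beta)

# 3) Residual adaptation as a special case: a z-dependent additive
#    correction to the shared weights, W(z) = W0 + dW(z).
resid = nn.Linear(z_dim, d_out * d_in)

def residual_layer(h, z):
    dW = resid(z).view(d_out, d_in)
    return layer(h, W0 + dW, b0)

h, z = torch.randn(d_in), torch.randn(z_dim)
outs = [f(h, z) for f in (mtds_layer, film_layer, residual_layer)]
```

In this reading, the full MTDS variant generates every entry of 𝑾 and 𝒃 from z, while FiLM and residual adaptation constrain that map to a diagonal rescaling plus shift, or a low-cost additive correction, respectively, which is why the quoted work can treat them as special cases.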