Prior work on controllable text generation usually assumes that the controlled attribute can take on one of a small set of values known a priori. In this work, we propose a novel task in which the syntax of a generated sentence is instead controlled by a sentential exemplar. To enable quantitative evaluation with standard metrics, we create a novel dataset with human annotations. We also develop a variational model with a neural module specifically designed for capturing syntactic knowledge, together with several multitask training objectives that promote disentangled representation learning. Empirically, the proposed model achieves improvements over baselines and learns to capture the desirable characteristics.

Encoders. At test time, we want to combine different semantic and syntactic inputs, which naturally suggests separate parameterizations for q_φ(y|x) and q_φ(z|x). Specifically, q_φ(y|x) is parameterized by a word averaging encoder followed by a three-layer feedforward neural network, since word averaging encoders have been observed to perform surprisingly well on semantic tasks (Wieting et al., 2016). q_φ(z|x) is parameterized by a bidirectional long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997), also followed by a three-layer feedforward neural network, where we concatenate the forward and backward vectors produced by the biLSTM and then average these vectors.

Decoders. As shown in Figure 3, at each time step, we concatenate the syntactic variable z with the previous word's embedding as the input to the