Proceedings of the 3rd Workshop on Neural Generation and Translation 2019
DOI: 10.18653/v1/d19-5605
Generating Diverse Story Continuations with Controllable Semantics

Abstract: We propose a simple and effective modeling framework for controlled generation of multiple, diverse outputs. We focus on the setting of generating the next sentence of a story given its context. As controllable dimensions, we consider several sentence attributes, including sentiment, length, predicates, frames, and automatically-induced clusters. Our empirical results demonstrate: (1) our framework is accurate in terms of generating outputs that match the target control values; (2) our model yields increased ma…
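As a rough illustration of the kind of conditioning the abstract describes, controllable generation frameworks commonly prepend attribute control tokens to the model input so a sequence-to-sequence model learns to condition its continuation on them. The sketch below is hypothetical (not the paper's code); the function name, token format, and attribute names are assumptions for illustration only.

```python
# Hypothetical sketch of control-token conditioning, a common way to
# implement controllable generation. Not taken from the paper's code.
def build_controlled_input(context: str, controls: dict) -> str:
    """Prefix the story context with <attr=value> control tokens."""
    prefix = " ".join(f"<{k}={v}>" for k, v in controls.items())
    return f"{prefix} {context}"

example = build_controlled_input(
    "Mia found an old map in the attic.",
    {"sentiment": "positive", "length": "short"},
)
print(example)
# <sentiment=positive> <length=short> Mia found an old map in the attic.
```

Varying the control values (e.g., different sentiments or target lengths) while holding the context fixed is one simple way such a framework can produce multiple, diverse continuations.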

Cited by 14 publications (15 citation statements)
References 35 publications
“…Semantics-Based Generation has reemerged for use in various tasks such as paraphrasing, machine translation (Marcheggiani et al., 2018), and story generation (Tu et al., 2019; Fan et al., 2019). Semantic representations such as semantic frames and semantic role labels provide abstractions that capture the underlying meanings of different surface realizations (e.g., paraphrases, other languages).…”
Section: Related Work (mentioning)
confidence: 99%
“…Despite the good performance of these models, one widely acknowledged intrinsic drawback is the generation of safe and commonplace responses (Sordoni et al., 2015), attributed to an improper objective function (Li et al., 2016), lack of model variability (Serban et al., 2017; Zhao et al., 2017), a weak conditional signal (Tao et al., 2018), and model over-confidence (Jiang and de Rijke, 2018). This tendency has prompted the study of methods that improve diversity and has produced a wide variety of solutions, such as optimizing a different loss function (Li et al., 2016), varying the latent space (Shao et al., 2019), utilizing adversarial learning (Xu et al., 2018; Shetty et al., 2017; Shi et al., 2018), and leveraging non-conversational information (Wu et al., 2020; Su et al., 2020; Tu et al., 2019). Our work differs from all of the above in that we adopt a pipeline model which promotes diversity by generating a variety of candidates.…”
Section: Conversational Language Generation (mentioning)
confidence: 99%
“…The options could also be a single query word at the beginning (Austin, 2019), the article title (Yan, 2016), the politeness (Niu and Bansal, 2018) or specificity (See et al., 2019b) of the text, or the length of the generated sentence (Tu et al., 2019). However, such options cannot provide fine-grained control over the topical direction of the generated content.…”
Section: Related Work (mentioning)
confidence: 99%