Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/829
Controllable Neural Story Plot Generation via Reward Shaping

Abstract: Language-modeling-based approaches to story plot generation attempt to construct a plot by sampling from a language model (LM) to predict the next character, word, or sentence to add to the story. LM techniques lack the ability to receive guidance from the user to achieve a specific goal, resulting in stories that don't have a clear sense of progression and lack coherence. We present a reward-shaping technique that analyzes a story corpus and produces intermediate rewards that are backpropagated into a pre-tra…
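The corpus-analysis step in the abstract — producing intermediate rewards that guide generation toward a goal event — can be sketched with a toy scorer. The function and scoring formula below are illustrative assumptions, not the paper's exact method: verbs that occur frequently and close before the goal event in the corpus earn a higher shaped reward.

```python
from collections import defaultdict

def shaped_rewards(stories, goal):
    """Assign each event verb an intermediate reward based on how often,
    and how near, it appears before the goal verb in a story corpus.
    (Illustrative sketch; the paper's actual scoring differs.)"""
    dist_sum = defaultdict(float)
    count = defaultdict(int)
    for story in stories:
        if goal not in story:
            continue
        g = story.index(goal)
        for i, verb in enumerate(story[:g]):
            count[verb] += 1
            dist_sum[verb] += g - i  # distance to the goal event
    # more frequent + closer to the goal => higher reward
    return {v: count[v] / (dist_sum[v] / count[v]) for v in count}

# Hypothetical mini-corpus of event-verb sequences:
corpus = [
    ["meet", "fight", "marry"],
    ["meet", "travel", "fight", "marry"],
    ["travel", "meet", "marry"],
]
rewards = shaped_rewards(corpus, goal="marry")
```

Here "fight", which always immediately precedes "marry", receives the highest reward, so a goal-driven generator would be nudged toward it as the story nears its end.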


Cited by 90 publications (86 citation statements)
References 15 publications
“…It is, therefore, important that this observation finds its place in automatic storytelling systems. Some attempts have been made in natural language generation towards controllable story generation (Tambwekar et al., 2018). We propose that emotion expression should be one of the controllable parameters in automatic storytellers.…”
Section: Discussion
confidence: 99%
“…Our event-to-event system is the policy gradient deep reinforcement learner from Tambwekar et al. (2019). Briefly, the technique starts with a sequence-to-sequence LSTM model trained to perform the event-to-event task.…”
Section: Event-to-event Implementation
confidence: 99%
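The policy-gradient learner this citation describes can be sketched as a single REINFORCE-style update on next-event logits. The tiny softmax policy, learning rate, and update rule below are illustrative assumptions, not the cited LSTM implementation: the update raises the log-probability of a sampled event in proportion to its shaped reward.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_step(logits, sampled, reward, lr=0.1):
    """One REINFORCE update on a softmax policy over next events.
    Gradient of log pi(a) w.r.t. the logits is one_hot(a) - probs."""
    probs = softmax(logits)
    return [l + lr * reward * ((1.0 if i == sampled else 0.0) - p)
            for i, (l, p) in enumerate(zip(logits, probs))]

# Uniform policy over three candidate next events; event 1 was sampled
# and earned a positive shaped reward, so its logit should increase.
logits = [0.0, 0.0, 0.0]
new_logits = reinforce_step(logits, sampled=1, reward=2.0)
```

In the full system this gradient would flow through the sequence-to-sequence model's parameters rather than raw logits, but the direction of the update is the same.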
“…These structures then need to be further processed to produce a text in natural language, serving either as textual description of the action or as characters' dialogue lines. This two-step generation process is common to very different approaches such as character-based planning [2], rule-based forward simulation of narrative acts [15], broad autonomous agents [6], and even machine learning based approaches [16]. Because the first step is the biggest challenge in the field of Interactive Narrative, the second step tends to be neglected.…”
Section: Introduction
confidence: 99%