The 16th International Conference on the Foundations of Digital Games (FDG 2021)
DOI: 10.1145/3472538.3472595

Fine-tuning GPT-2 on annotated RPG quests for NPC dialogue generation

Abstract: GPT-2, a neural language model trained on a large dataset of English web text, has been used in a variety of natural language generation tasks because of the language quality and coherence of its outputs. In order to investigate the usability of GPT-2 for text generation for video games, we fine-tuned GPT-2 on a corpus of video game quests and used this model to generate dialogue lines for questgiver NPCs in a role-playing game. We show that the model learned the structure of quests and NPC dialogue, and inves…
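The abstract describes fine-tuning GPT-2 on a corpus of annotated quests and then sampling dialogue lines for questgiver NPCs. The sketch below is a minimal, hypothetical illustration of that kind of pipeline using the Hugging Face transformers library; it is not the authors' code, and the corpus file quests.txt, the quest/dialogue prompt format, and all hyperparameters are assumptions rather than details taken from the paper.

```python
# Hypothetical sketch: fine-tune GPT-2 on a line-per-quest text corpus,
# then sample a dialogue line for a quest giver NPC from a prompt.
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

class QuestDataset(Dataset):
    """Each line of the corpus file is one annotated quest (assumed format)."""
    def __init__(self, path, tokenizer, max_length=256):
        with open(path, encoding="utf-8") as f:
            lines = [l.strip() for l in f if l.strip()]
        self.examples = [
            tokenizer(l, truncation=True, max_length=max_length,
                      return_tensors="pt")["input_ids"].squeeze(0)
            for l in lines
        ]
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, idx):
        return self.examples[idx]

def collate(batch, pad_id):
    # Right-pad to the longest sequence in the batch; -100 labels are ignored by the loss.
    maxlen = max(x.size(0) for x in batch)
    input_ids = torch.full((len(batch), maxlen), pad_id, dtype=torch.long)
    labels = torch.full((len(batch), maxlen), -100, dtype=torch.long)
    for i, x in enumerate(batch):
        input_ids[i, :x.size(0)] = x
        labels[i, :x.size(0)] = x
    return input_ids, labels

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
pad_id = tokenizer.eos_token_id  # GPT-2 has no pad token; reuse EOS for padding

dataset = QuestDataset("quests.txt", tokenizer)   # hypothetical corpus file
loader = DataLoader(dataset, batch_size=4, shuffle=True,
                    collate_fn=lambda b: collate(b, pad_id))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):                            # epoch count is illustrative
    for input_ids, labels in loader:
        input_ids, labels = input_ids.to(device), labels.to(device)
        loss = model(input_ids=input_ids, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Sample an NPC dialogue line from an assumed quest-style prompt.
model.eval()
prompt = "Objective: Collect 10 wolf pelts.\nDialogue:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**inputs, max_new_tokens=60, do_sample=True,
                        top_p=0.9, pad_token_id=pad_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```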

Cited by 33 publications (10 citation statements) | References 20 publications
“…Our results suggest that even the largest variant of GPT-2, fine-tuned on our well-curated data set, cannot be used to autonomously generate high-quality quest descriptions reliably. This confirms findings in related work [22]. We especially found that Quest-GPT-2 lacks the ability (i) to distinguish between multiple entities, and (ii) to "glue" quest ingredients well together while not relaying illogical information.…”
Section: G. Discussion (supporting)
confidence: 91%
“…Based on a small user study with 75 participants, they found that the GPT-2 quests were experienced as more valuable and coherent, but less surprising and novel than quests produced by random assignment or Markov chains. Most closely related to our work, van Stegeren and Myśliwiec [22] have recently fine-tuned GPT-2 for the generation of quest descriptions told from the perspective of an NPC. Crucially though, they solely use data from the Massively Multiplayer Online Role-Playing Game (MMORPG) World of Warcraft [23].…”
(mentioning)
confidence: 86%
“…Furthermore, the generation of narratives, stories, and quests using a variety of techniques such as planning algorithms [47,61], grammars [7,27], or machine learning [53,58], is a growing and important field within games research and narrative research in general [18,22,32,62]. One typical approach for the generation of content and stories is the use of patterns representing different elements such as level design patterns [4,56], quest patterns and common quests in games [16,57], or identifying fundamental units and assembling them based on various pre-conditions [20,31].…”
Section: Related Work (mentioning)
confidence: 99%
“…Furthermore, the generation of narratives, stories, and quests using a variety of techniques such as planning algorithms [21]- [23], grammars [24], [25], or machine learning [26], [27], is a growing and important field within games research and narrative research in general [8], [20], [28], [29]. One typical approach for the generation of content and stories is the use of patterns representing different elements such as level design patterns [9], [30], quest patterns and common quests in games [25], [31]- [33], or identifying fundamental units and assembling them based on various pre-conditions [34], [35].…”
Section: Related Work (mentioning)
confidence: 99%