2020 Third International Conference on Artificial Intelligence for Industries (AI4I)
DOI: 10.1109/ai4i49448.2020.00012

The Go Transformer: Natural Language Modeling for Game Play

Abstract: This work applies natural language modeling to generate plausible strategic moves in the ancient game of Go. We train the Generative Pretrained Transformer (GPT-2) to mimic the style of Go champions as archived in Smart Game Format (SGF), which offers a text description of move sequences. The trained model further generates valid but previously unseen strategies for Go. Because GPT-2 preserves punctuation and spacing, the raw output of the text generator provides inputs to game visualization and creative patte…
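The abstract describes a plain-text training pipeline: champion games in SGF are fed to GPT-2 as ordinary text, and the fine-tuned model then continues a partial game. The sketch below illustrates that idea with the Hugging Face transformers library; the file name games.sgf, the hyperparameters, and the single-example training loop are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: fine-tune GPT-2 on SGF game records treated as plain text,
# then sample a continuation of a partial game. "games.sgf" and all
# hyperparameters are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Each line is one game as SGF text, e.g. "(;GM[1]SZ[19];B[pd];W[dp];B[qp]...)"
with open("games.sgf") as f:
    games = [line.strip() for line in f if line.strip()]

model.train()
for game in games:
    enc = tokenizer(game, return_tensors="pt", truncation=True, max_length=512)
    # Causal LM objective: labels are the input ids themselves; the model
    # shifts them internally so each position predicts the next token.
    loss = model(**enc, labels=enc["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Generation: prompt with an opening and let the model propose further moves.
model.eval()
prompt = "(;GM[1]SZ[19];B[pd];W[dp]"
ids = tokenizer(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=32, do_sample=True, top_k=50,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because the model never sees a board, move legality must be checked downstream, which is why the abstract stresses that GPT-2 preserves the punctuation and spacing that SGF parsers expect.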

Cited by 7 publications (3 citation statements) | References 12 publications
“…Depending on the game aim, this involves (1) translating the current situation with regards to the expected outcome into language; (2) generating content using a large language model; and (3) translating the output back in order to implement it in a game system. For example, Ciolino et al [43] used a fine-tuned GPT-2 model to generate Go moves by translating the current board state to text and the language output back to action suggestions. Such a process is naturally easier for purely text-based tasks, such as dialog generation, where text is already the expected output, and the expected output can comparatively easily be described in language [39,41].…”
Section: Language Model Approach (mentioning)
Confidence: 99%
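The three-step loop quoted above (translate the game state into text, generate with the language model, translate the output back into an action) can be made concrete with two small helpers. The coordinate handling and function names below are my own illustrative assumptions, not code from the cited papers, and the generated string is canned where the fine-tuned model's output would go.

```python
# Hedged sketch of the translate-generate-translate loop described above.
# Helper names (moves_to_prompt, next_move_from_text) are hypothetical.
import re

COLS = "abcdefghijklmnopqrs"  # SGF column letters for a 19x19 board

def moves_to_prompt(moves):
    """Render (color, (col, row)) move history as an SGF fragment."""
    tokens = [f";{c}[{COLS[x]}{COLS[y]}]" for c, (x, y) in moves]
    return "(;GM[1]SZ[19]" + "".join(tokens)

def next_move_from_text(generated, history_len):
    """Parse the first new move out of the model's text continuation."""
    found = re.findall(r";([BW])\[([a-s])([a-s])\]", generated)
    if len(found) <= history_len:
        return None  # the model produced no valid new move
    c, col, row = found[history_len]
    return c, (COLS.index(col), COLS.index(row))

history = [("B", (15, 3)), ("W", (3, 15))]
prompt = moves_to_prompt(history)
# `generated` would come from the fine-tuned model; a canned string here.
generated = prompt + ";B[qp];W[dd]"
print(next_move_from_text(generated, len(history)))  # ('B', (16, 15))
```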
“…Another potential search method is global search. Ciolino et al used a general language model similar to GPT2 to model the chess game strategy, which can obtain a favorable opening action sequence [58]. This modeling method provides a new global search method for many board games, providing historical text annotations as training data.…”
Section: Search Strategy (mentioning)
Confidence: 99%
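Reading the "global search" framing in [58] loosely, one simple realization is to sample several candidate continuations from the fine-tuned model and inspect or score them downstream. The snippet assumes the model and tokenizer from the earlier sketch are still in scope; the values of num_return_sequences and top_k are arbitrary choices, not settings from the cited work.

```python
# Sample k candidate opening continuations from the fine-tuned model.
# Reuses `model` and `tokenizer` from the fine-tuning sketch above.
prompt = "(;GM[1]SZ[19];B[pd]"
ids = tokenizer(prompt, return_tensors="pt").input_ids
candidates = model.generate(
    ids, do_sample=True, top_k=50, num_return_sequences=5,
    max_new_tokens=24, pad_token_id=tokenizer.eos_token_id,
)
for seq in candidates:
    # Each decoded line is a candidate opening; legality must still be
    # checked by a Go engine or SGF validator downstream.
    print(tokenizer.decode(seq, skip_special_tokens=True))
```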
“…One such example is Kaplan, Sauer, and Sosa (2017), where natural language methods were used with deep reinforcement learning to play Atari games. In addition to this, recent work on using transformer models with reinforcement learning includes: Parisotto et al (2020), Noever, Ciolino, and Kalin (2020) train GPT-2 (Radford et al 2019) on the PGN format to learn chess, Ciolino, Kalin, and Noever (2020) trained GPT-2 in a similar way to learn Go, and Stein, Filchenkov, and Asadulaev (2020) used Transformers for Deep Q-learning to play Atari games. Krause et al (2020) introduced a coding scheme to improve small language models.…”
Section: Related Work (mentioning)
Confidence: 99%