International Conference on the Foundations of Digital Games 2020
DOI: 10.1145/3402942.3409604
Sequential Segment-based Level Generation and Blending using Variational Autoencoders

Cited by 31 publications (39 citation statements)
References 10 publications
“…Some prior work has modeled Mega Man with machine learning without generating new Mega Man content [7,12]. Sarkar et al have modeled Mega Man levels along with levels from a large number of other games with Variational Autoencoders (VAEs) for the purpose of recombining this content to create entirely new types of content [14][15][16][17]33]. We instead focus on the problem of generating levels that resemble those from the original Mega Man.…”
Section: Related Work
confidence: 99%
“…To the best of our knowledge, the only prior PCGML work that has incorporated ensemble learning has been work that employs a Random Forest (RF). However, all prior instances that have employed RFs have done so for a secondary classification task, and not for the primary generation task [5,14,17]. As such, our work stands out as the first PCGML approach to employ an ensemble of simple models for the primary generation task.…”
Section: Related Work
confidence: 99%
“…We employ a Random Forest (RF) as a low-data classifier to approximate a "human-like" fitness function. Random Forests have appeared in prior PCGML work [6,16,18]. The major difference is that in this prior work, the RF is trained once on existing data, while we iteratively train our RF on generated data to approximate our desired fitness function.…”
Section: Procedural Content Generation via Machine Learning
confidence: 99%
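The iterative retraining loop this statement describes can be sketched in minimal form. Everything in the sketch is a hypothetical stand-in: a small ensemble of decision stumps replaces the Random Forest, `oracle` is a placeholder for the "human-like" judgment being approximated, and `generate` is a placeholder level generator producing feature vectors. Only the control flow — regenerate, relabel, retrain each round — reflects the cited approach.

```python
import random

random.seed(0)
N_FEAT = 5  # toy number of level features

def oracle(x):
    # Placeholder for the human-like judgment being approximated:
    # here, a "level" is good if its feature sum is positive.
    return sum(x) > 0

def generate():
    # Placeholder generator: one level = one feature vector.
    return [random.uniform(-1, 1) for _ in range(N_FEAT)]

def fit_stump(data, labels):
    """One decision stump on a random feature; an ensemble of these
    is a toy stand-in for the Random Forest."""
    f = random.randrange(N_FEAT)
    t = sum(x[f] for x in data) / len(data)  # threshold at the mean
    # Orient the stump so it agrees with the labels more often than not.
    agree = sum((x[f] > t) == y for x, y in zip(data, labels))
    flip = agree < len(data) / 2
    return lambda x: (x[f] > t) != flip

def forest_predict(forest, x):
    votes = sum(stump(x) for stump in forest)
    return votes * 2 > len(forest)  # majority vote

# Iterative loop: each round, retrain the ensemble on freshly
# generated, freshly labelled data, as an approximate fitness function.
forest = []
for _ in range(5):  # 5 retraining rounds
    batch = [generate() for _ in range(50)]
    labels = [oracle(x) for x in batch]
    forest = [fit_stump(batch, labels) for _ in range(15)]

# Check that the learned ensemble tracks the oracle on fresh samples.
sample = [generate() for _ in range(200)]
acc = sum(forest_predict(forest, x) == oracle(x) for x in sample) / 200
```

In the cited work the labels would come from human-like evaluation of generated levels rather than a closed-form `oracle`, and the classifier would be an actual Random Forest; the point of the sketch is the retrain-on-generated-data loop.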
“…While generating whole levels by stitching together successively generated segments [13] is acceptable for a game like Mario with uniform progression, this does not work for games that progress in multiple directions and orientations. This was addressed in [14] via an approach combining a VAE-based sequential model and a classifier to generate and then place a segment relative to the previous one, thus generating whole levels by an iterative loop of decoding and encoding successive segments. However, this afforded control only via definition of the initial segment, with the orientation of successive segments and the properties of the blend not being controllable.…”
Section: Introduction
confidence: 99%
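The decode/encode loop described in this statement can be illustrated with a heavily simplified sketch. The trained VAE is replaced here by random linear maps, and the placement classifier by a trivial rule, so only the control flow — encode the previous segment, perturb in latent space, decode the next segment, classify its orientation — mirrors the approach described in [14]; the model names and dimensions below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT, SEG = 8, 16  # toy latent dimension and flattened segment size

# Toy stand-ins for the trained VAE; the real encoder/decoder are
# neural networks learned from level data.
W_enc = rng.normal(size=(SEG, LATENT))
W_dec = rng.normal(size=(LATENT, SEG))

def encode(segment):
    return segment @ W_enc  # segment -> latent vector

def decode(z):
    return z @ W_dec        # latent vector -> segment

def classify_orientation(segment):
    # Hypothetical classifier deciding where the next segment attaches
    # relative to the previous one (e.g. to its right or above it).
    return "right" if segment.sum() >= 0 else "up"

def generate_level(initial_segment, n_segments):
    """Iterative decode/encode loop: each new segment is conditioned on
    the previous one, and the classifier places it relative to it."""
    level = [(initial_segment, None)]  # first segment has no placement
    seg = initial_segment
    for _ in range(n_segments - 1):
        z = encode(seg)                        # re-encode previous segment
        z = z + 0.1 * rng.normal(size=LATENT)  # sample nearby in latent space
        seg = decode(z)                        # decode the next segment
        level.append((seg, classify_orientation(seg)))
    return level

level = generate_level(rng.normal(size=SEG), 4)
print(len(level))  # prints 4
```

The limitation quoted above is visible in the sketch: the only external control is `initial_segment`; each subsequent segment's orientation falls out of the classifier rather than being specifiable by the designer.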