2020 IEEE Conference on Games (CoG)
DOI: 10.1109/cog47356.2020.9231548
Finding Game Levels with the Right Difficulty in a Few Trials through Intelligent Trial-and-Error

Cited by 22 publications (8 citation statements)
References 30 publications
“…a reinforcement learning agent) needs to learn before being able to solve a level [18]. Different types of agents can even be used to approximate different levels of player skill [19].…”
Section: Discussion, Future Work and Conclusion
confidence: 99%
“…QD approaches such as MAP-Elites [7,33] search for solutions along a continuum of user-defined features, making them ideal for exploration. MAP-Elites has been used for design exploration in domains such as aerodynamics [15,16,17,23,24], and game design [1,5,19,20], but has been restricted to consideration of a single objective. MAP-Elites operates by first discretizing the feature space into bins, collectively known as a map or archive.…”
Section: Exploration and Optimization With Non-objective Criteria
confidence: 99%
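The statement above describes the core MAP-Elites loop: discretize a user-defined feature space into bins and keep the best solution ("elite") found so far in each bin. A minimal sketch of that loop is below; the genome representation, fitness, and feature functions are toy assumptions for illustration, not taken from the cited work:

```python
import random

# Minimal MAP-Elites sketch (illustrative toy problem).
# Genomes are 2-vectors in [0, 1]^2; the archive is a grid over one
# user-defined feature dimension, keeping the best genome per bin.

BINS = 10                       # discretization of the feature space

def fitness(genome):            # toy objective: maximize sum of genes
    return sum(genome)

def feature(genome):            # toy behavior descriptor in [0, 1]
    return genome[0]

def bin_index(genome):
    return min(int(feature(genome) * BINS), BINS - 1)

def map_elites(iterations=2000, seed=0):
    rng = random.Random(seed)
    archive = {}                # bin index -> (fitness, genome)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # mutate a randomly chosen elite
            _, parent = archive[rng.choice(list(archive))]
            child = [min(1.0, max(0.0, g + rng.gauss(0, 0.1))) for g in parent]
        else:
            # occasional random restart keeps exploration going
            child = [rng.random(), rng.random()]
        b = bin_index(child)
        f = fitness(child)
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, child)   # new elite for this bin
    return archive

archive = map_elites()
```

The result is not a single optimum but a map of high-performing solutions spread across the feature dimension, which is what makes the method suited to design exploration.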
“…In the context of games, the De-LeNox system [25] used novelty search with an autoencoder to generate 2D arcade-style spaceships while Schrum et al [38] used MAP-Elites for latent space illumination in a hybrid method for Mario level generation that combined GANs with compositional pattern producing networks (CPPNs). In another hybrid approach, Gonzalez et al [12] used MAP-Elites in conjunction with gameplaying agents and Bayesian optimization in a process called Intelligent Trial & Error to generate GVG-AI levels of appropriate difficulty.…”
Section: Related Work
confidence: 99%
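The Intelligent Trial & Error process mentioned above uses a precomputed archive of levels with predicted difficulties as a prior, then runs a few play-test trials to adapt those predictions to the current player. The sketch below conveys that loop in simplified form: it replaces the paper's Gaussian-process Bayesian optimization with a plain shared-offset correction so it stays self-contained, and all names, the toy archive, and the correction rule are illustrative assumptions:

```python
# Simplified Intelligent Trial & Error loop (illustrative, not the cited
# paper's implementation): `archive` maps level ids to *predicted*
# difficulty; each trial plays the most promising level, observes the true
# difficulty for this player, and corrects predictions with a shared offset
# until a level within tolerance of the target difficulty is found.

def intelligent_trial_and_error(archive, true_difficulty, target,
                                tol=0.05, max_trials=10):
    """archive: dict level_id -> predicted difficulty in [0, 1]."""
    offset = 0.0                # learned correction, shared across levels
    tried = set()
    for trial in range(1, max_trials + 1):
        # pick the untried level whose corrected prediction is closest to target
        level = min((l for l in archive if l not in tried),
                    key=lambda l: abs(archive[l] + offset - target))
        tried.add(level)
        observed = true_difficulty(level)        # one play-through
        if abs(observed - target) <= tol:
            return level, trial
        # nudge the shared correction toward the observed prediction error
        offset += 0.5 * (observed - (archive[level] + offset))
    return None, max_trials

# Toy example: the prior systematically underestimates difficulty by 0.2.
archive = {i: i / 10 for i in range(10)}
found, trials = intelligent_trial_and_error(
    archive, true_difficulty=lambda l: l / 10 + 0.2, target=0.5)
```

In this toy run the loop converges in a handful of trials, mirroring the "few trials" goal in the paper's title: the prior archive does the heavy lifting offline, and online trials only correct for the individual player.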