2012 IEEE Conference on Computational Intelligence and Games (CIG) 2012
DOI: 10.1109/cig.2012.6374144
Learning to intercept opponents in first person shooter games

Abstract: One important aspect of creating game bots is adversarial motion planning: identifying how to move to counter possible actions made by the adversary. In this paper, we examine the problem of opponent interception, in which the goal of the bot is to reliably apprehend the opponent. We present an algorithm for motion planning that couples planning and prediction to intercept an enemy on a partially-occluded Unreal Tournament map. Human players can exhibit considerable variability in their movement prefe…

Cited by 13 publications (16 citation statements)
References 17 publications
“…In their model, under the assumption that agents are rational, the most likely goal driving the observed behavior is estimated. Tastan et al. presented a framework to predict the positions of the opponent [28]. Unlike the work of Hladky, they used the learned MDP as the motion model of the PF.…”
Section: Related Work
confidence: 99%
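The approach quoted above, a particle filter whose motion model is a policy learned from an MDP, can be sketched as follows. This is a minimal illustration only: the grid, the stand-in policy, and the observation model are hypothetical and are not taken from the cited work.

```python
import random

# Candidate one-step moves on a grid: right, left, down, up, stay.
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def learned_policy(cell):
    # Hypothetical stand-in for the MDP-learned policy: a distribution
    # over MOVES for the given cell (here biased toward moving right).
    return [0.4, 0.1, 0.2, 0.1, 0.2]

def predict(particles):
    """Propagate each particle through the MDP-derived motion model."""
    out = []
    for (x, y) in particles:
        dx, dy = random.choices(MOVES, weights=learned_policy((x, y)))[0]
        out.append((x + dx, y + dy))
    return out

def update(particles, observation, noise=0.5):
    """Reweight and resample particles against a noisy position observation."""
    ox, oy = observation
    weights = [1.0 / (1.0 + noise * ((x - ox) ** 2 + (y - oy) ** 2))
               for (x, y) in particles]
    return random.choices(particles, weights=weights, k=len(particles))

particles = [(0, 0)] * 200
particles = predict(particles)                  # motion model step
particles = update(particles, observation=(1, 0))  # measurement step
```

The key point of the quoted work is the predict step: instead of a generic random-walk motion model, each particle moves according to the policy learned from the MDP.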
“…Another way is approximate inference, and one of the most widely used algorithms is the PF. Besides the works in [8,22,28], Weber et al. estimated the locations of enemy units that had been encountered in StarCraft using a particle filter. In their work, each particle, which consists of a class, a weight, and a trajectory, corresponds to one previously encountered enemy unit [29].…”
Section: Related Work
confidence: 99%
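The per-unit particle described in the quote, one particle per previously encountered enemy unit carrying a class, a weight, and a trajectory, might be modelled along these lines. The field names and the decay rule are illustrative assumptions, not taken from the cited paper.

```python
from dataclasses import dataclass, field

@dataclass
class EnemyParticle:
    # One particle per previously encountered enemy unit (after the
    # description of Weber et al. [29]; fields here are illustrative).
    unit_class: str                  # e.g. "zealot" or "marine"
    weight: float = 1.0              # confidence in the unit's estimated position
    trajectory: list = field(default_factory=list)  # past (x, y) sightings

    def observe(self, pos):
        """Record a confirmed sighting and reset confidence."""
        self.trajectory.append(pos)
        self.weight = 1.0

    def decay(self, rate=0.95):
        """Lower confidence while the unit remains unobserved."""
        self.weight *= rate

p = EnemyParticle("zealot")
p.observe((10, 4))
p.decay()
```

Storing the whole trajectory in the particle lets the tracker extrapolate a unit's likely position from its past movement rather than from the last sighting alone.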
“…The limitation of the research in [17] is that goal inference is not implemented in scenarios where the results of actions are uncertain. Another related work on human behavior modeling under the MDP framework was done by Tastan et al. [18]. Using inverse reinforcement learning (IRL) and the PF, they learned the opponent's motion model and tracked the opponent in the game Unreal Tournament 2004.…”
Section: Recognizing Goals of a Single Agent
confidence: 99%
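The IRL step mentioned above, recovering a reward function from observed trajectories so that an MDP policy reproduces the observed motion, can be illustrated with a minimal feature-expectation sketch. The features, demonstrations, and weight rule below are hypothetical; this does not reproduce the cited work's actual algorithm.

```python
# Minimal IRL-flavoured sketch: compute empirical feature expectations of
# demonstrated trajectories, the quantity that apprenticeship-learning-style
# IRL methods try to match. All names and values are illustrative only.

def feature(cell):
    # Two hypothetical binary features per grid cell.
    near_cover = 1.0 if cell in {(1, 1), (2, 1)} else 0.0
    near_goal = 1.0 if cell == (3, 3) else 0.0
    return [near_cover, near_goal]

def feature_expectation(trajectories):
    """Average feature counts over all visited cells in the demonstrations."""
    total = [0.0, 0.0]
    steps = 0
    for traj in trajectories:
        for cell in traj:
            total = [t + f for t, f in zip(total, feature(cell))]
            steps += 1
    return [t / steps for t in total]

# Hypothetical demonstrated paths of the opponent.
demos = [[(0, 0), (1, 1), (2, 1), (3, 3)],
         [(0, 0), (1, 1), (3, 3)]]
mu = feature_expectation(demos)
# A linear reward r(cell) = w . feature(cell) is then fit so that the
# induced policy matches mu; here we just expose mu itself.
```

Once reward weights are fit, solving the MDP for that reward yields the motion model that the quoted work plugs into the particle filter.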
“…It was originally applied with success to robot manufacturing processes [1]. Preliminary work on imitation learning has focused on the task of motion planning for artificial opponents in first-person shooter games [2], but modelling game AI through imitation learning is seen as having great potential for more games than just first-person shooters.…”
Section: Introduction
confidence: 99%
“…Using a 5×5 grid to represent the MDP state, the origin is located in the upper-left corner and Mario is in the middle of the grid, occupying two cells (positions [1,2] and [2,2]). Assuming there are 4 possible items (coin, enemy, platform, and brick) appearing in the grid, we would have 4^23 states overall.…”
confidence: 99%
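The state count in the quoted example follows directly: a 5×5 grid has 25 cells, Mario occupies 2 of them, and each of the remaining 23 cells independently holds one of the 4 item types (as stated in the quote):

```python
cells = 5 * 5           # 5×5 grid
free = cells - 2        # Mario occupies two cells
items = 4               # coin, enemy, platform, brick
states = items ** free  # one item choice per free cell
print(states)           # 4**23 = 70368744177664
```

The exponential blow-up (about 7 × 10^13 states) is why such grid encodings are rarely enumerated explicitly and are instead handled with function approximation or sampling.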