DOI: 10.1609/aaai.v26i1.8148
Alpha-Beta Pruning for Games with Simultaneous Moves

Abstract: Alpha-Beta pruning is one of the most powerful and fundamental MiniMax search improvements. It was designed for sequential two-player zero-sum perfect-information games. In this paper we introduce a sound Alpha-Beta-like pruning method for the more general class of “stacked matrix games”, which allow simultaneous moves by both players. This is accomplished by maintaining upper and lower bounds on achievable payoffs in states with simultaneous actions, and by pruning dominated actions based on the feasibility of…
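As background, here is a minimal sketch of the classic sequential Alpha-Beta procedure that the paper generalizes to simultaneous moves. The nested-list tree encoding and the payoff values are illustrative choices, not taken from the paper:

```python
from math import inf

# Game tree encoding (illustrative): a leaf is a numeric payoff for Max;
# an internal node is a list of child subtrees. Players alternate levels.
def alpha_beta(node, alpha=-inf, beta=inf, maximizing=True):
    if not isinstance(node, list):      # leaf: terminal payoff for Max
        return node
    if maximizing:
        value = -inf
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # beta cutoff: Min avoids this branch
                break
        return value
    value = inf
    for child in node:
        value = min(value, alpha_beta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:               # alpha cutoff: Max avoids this branch
            break
    return value

# Max picks a sublist, Min picks a leaf within it.
tree = [[3, 5], [2, 9], [0, 7]]
print(alpha_beta(tree))  # 3
```

The second and third sublists are cut off early: once Min finds a child below alpha, the remaining siblings cannot affect the root value.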


Cited by 8 publications (6 citation statements) · References 11 publications
“…This property, together with simultaneous moves and fully observable state variables, places combat games in the class of "stacked matrix games". Such games can, in principle, be solved by backward induction starting from terminal states, via Nash equilibrium computations, for instance by solving linear programs (Saffidine, Finnsson, and Buro 2012). However, Furtak and Buro (2010) showed that deciding which player survives in combat games in which units cannot even move is PSPACE-hard in general.…”
Section: Solution Concepts For Combat Games
confidence: 99%
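The backward-induction step described in this statement reduces, at each simultaneous-move node, to computing the value of a matrix game. As a hedged illustration (not the cited papers' implementation), the 2x2 zero-sum case has a closed-form value, so no full linear-program solver is needed:

```python
def matrix_game_value_2x2(g):
    """Value of a 2x2 zero-sum game g = [[a, b], [c, d]], payoffs for the
    row (Max) player. This is the one-node primitive that backward induction
    would apply bottom-up in a stacked matrix game."""
    (a, b), (c, d) = g
    # Pure-strategy check: maximin == minimax means a saddle point exists.
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:
        return maximin
    # Otherwise both players mix; closed form for 2x2 games without a saddle point.
    return (a * d - b * c) / (a + d - b - c)

# Matching pennies has no saddle point; its mixed-equilibrium value is 0.
print(matrix_game_value_2x2([[1, -1], [-1, 1]]))  # 0.0
```

Larger matrices require solving a linear program, which is exactly the per-node cost that makes sound pruning attractive.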
“…While some of these problems have been addressed, such as durative actions (Churchill, Saffidine, and Buro 2012) or simultaneous moves (Kovarsky and Buro 2005; Saffidine, Finnsson, and Buro 2012), the branching factor in RTS games is too large for current state-of-the-art techniques. To see why, we should distinguish what we call unit-actions (actions that a unit executes) from player-actions.…”
Section: Real-time Strategy Games
confidence: 99%
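The unit-action versus player-action distinction can be made concrete with a small sketch (the counts below are hypothetical, not from the cited work): a player-action assigns one unit-action to every unit the player controls, so the number of player-actions is the product of the per-unit counts.

```python
from math import prod

# Hypothetical per-unit action counts: 8 units, each choosing among
# 6 unit-actions (e.g. move in four directions, attack, idle).
unit_action_counts = [6] * 8

# A player-action is one unit-action per unit, i.e. an element of the
# cross-product of the units' action sets.
branching_factor = prod(unit_action_counts)
print(branching_factor)  # 1679616 == 6**8
```

Even these modest per-unit numbers yield a branching factor in the millions, which is why full-width game-tree search is infeasible at the player-action level.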
“…Additionally, RTS games are simultaneous-action domains, where more than one player can issue actions at the same instant of time. Algorithms like minimax might under- or overestimate the value of positions, and several solutions have been proposed (Kovarsky and Buro 2005; Saffidine, Finnsson, and Buro 2012). However, we noticed that this had a very small effect on the practical performance of our algorithm in RTS games, so we have not incorporated any of these techniques into NaïveMCTS.…”
Section: Naïve Monte Carlo Tree Search In RTS Games
confidence: 99%
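A minimal example of the misestimation mentioned above (matching pennies is our illustrative choice, not a game from the cited papers): sequentializing a simultaneous-move game lets the second mover react to the first, so a pure-strategy minimax value under-estimates what the first player can guarantee with mixed play.

```python
# Matching pennies, payoffs for the row player; simultaneous play has
# mixed-equilibrium value 0.
pennies = [[1, -1], [-1, 1]]

# Minimax sequentialization: if the row player "moves first", the column
# player best-responds, so the row player gets only the pure maximin.
maximin = max(min(row) for row in pennies)
print(maximin)  # -1, an under-estimate of the simultaneous-play value 0
```

Swapping the move order over-estimates symmetrically, which is why simultaneous nodes need equilibrium values (or the bounds the Alpha-Beta generalization maintains) rather than plain minimax.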
“…To make game-tree search applicable at this level, they abstract the game state representation by grouping units into groups while retaining information about each individual unit, and allow only two types of actions per group: attack and merge with another group. Alpha-Beta has been used in scenarios with simultaneous moves (Saffidine, Finnsson, and Buro 2012), and Churchill et al. (Churchill, Saffidine, and Buro 2012) extended it with durative actions, handling situations with up to eight versus eight units without abstraction. An improvement of this work is presented in (Churchill and Buro 2013), where they define scripts to improve move ordering and experiment with UCT considering durations and a Portfolio Greedy Search, showing good results in larger combat scenarios than before.…”
Section: Introduction
confidence: 99%