2019
DOI: 10.1109/tg.2018.2835764
Emulating Human Play in a Leading Mobile Card Game

Abstract: Monte Carlo Tree Search (MCTS) has become a popular solution for game AI, capable of creating strong game playing opponents. However, the emergent playstyle of agents using MCTS is not necessarily human-like, believable or enjoyable. AI Factory Spades, currently the top rated Spades game in the Google Play store, uses a variant of MCTS to control AI allies and opponents. In collaboration with the developers, we showed in a previous study that the playstyle of human players significantly differed from that of t…

Cited by 20 publications (12 citation statements)
References 21 publications
“…In other domains, Richards and Amir [15] model the opponent's policy using a static evaluation technique and then perform inference on the opponent's remaining tiles given their most recent move in Scrabble. Baier et al [16] leverage policies trained from supervised human data to bias MCTS results; this is similar to our approach in that it uses human data to train an opponent model, but different because their model is not used to infer opponent hidden information. Sturtevant and Bowling [17] build a generalized model of the opponent from a set of candidate player strategies.…”
Section: A. Related Work
confidence: 99%
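The citation statement above refers to biasing MCTS with policies trained from supervised human data [16]. As a rough illustration of how such a bias is commonly injected into tree search, the sketch below adds a policy term to the UCT selection rule (a "progressive bias"). The `Node` fields, the `human_policy(state, move)` interface, and the weight `w` are assumptions made for this example, not the exact formulation used by Baier et al.

```python
import math
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Node:
    """Minimal MCTS node; fields are illustrative, not taken from the paper."""
    state: Any
    move: Any = None                 # move that led from the parent to this node
    visits: int = 0
    value_sum: float = 0.0
    children: List["Node"] = field(default_factory=list)

def biased_uct_select(node: Node,
                      human_policy: Callable[[Any, Any], float],
                      c: float = 1.4, w: float = 1.0) -> Node:
    """Pick a child by UCT plus a bias from a human-play policy model.

    `human_policy(state, move)` is assumed to return the probability that a
    human would choose `move` in `state`; its influence fades as visit counts
    grow, so the asymptotic behaviour of plain MCTS is preserved.
    """
    def score(child: Node) -> float:
        n = child.visits + 1e-9
        exploit = child.value_sum / n                                  # mean reward
        explore = c * math.sqrt(math.log(node.visits + 1) / n)         # UCT exploration
        bias = w * human_policy(node.state, child.move) / (child.visits + 1)
        return exploit + explore + bias

    return max(node.children, key=score)
```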
“…Risk-seeking bids when opponents are winning: When the opponents can win the game on this round, BIS modifies her bid in four cases: (1) the opponents have bid nil, (2) a high sum of bids, (3) the opponents are winning by a small points gap, or (4) the opponents are winning by a medium points gap and BIS has a risky nil hand.…”
Section: G1. End-of-game Bidding Modifications
confidence: 99%
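The four end-of-game cases quoted above map naturally onto a small rule-based adjustment step. The sketch below only illustrates that structure: the thresholds, the direction of each adjustment, and the `risky_nil_hand` predicate are placeholders invented for this example, not BIS's actual heuristics.

```python
def risky_nil_hand(hand) -> bool:
    """Placeholder predicate: could this hand plausibly go nil? (illustrative)"""
    return all(card.rank <= 10 for card in hand)

def adjust_end_of_game_bid(base_bid: int, hand, opponents, my_team,
                           sum_of_bids: int, high_bid_threshold: int = 11,
                           small_gap: int = 50, medium_gap: int = 150) -> int:
    """Adjust a baseline bid when the opponents can win the game this round.

    Mirrors the four quoted cases; every threshold and adjustment here is an
    illustrative placeholder, not the published rule set.
    """
    if not opponents.can_win_this_round:
        return base_bid

    gap = opponents.score - my_team.score

    if opponents.bid_nil:                          # case 1: an opponent bid nil
        return base_bid + 1                        # e.g. bid up to try to set the nil
    if sum_of_bids >= high_bid_threshold:          # case 2: high sum of bids
        return max(base_bid - 1, 0)                # e.g. back off to avoid being set
    if 0 < gap <= small_gap:                       # case 3: small points gap
        return base_bid + 1                        # e.g. take on extra risk to close it
    if small_gap < gap <= medium_gap and risky_nil_hand(hand):
        return 0                                   # case 4: gamble on a nil bid
    return base_bid
```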
“…Two groups have conducted intensive research in the specific area of Spades agents: a group from the University of Alberta [19,20,21,22] and the AI Factory group [2,8,9,25]. The latter launched a commercial application called Spades Free.…”
Section: Introduction
confidence: 99%
“…MCTS has also been applied to the game of 7 Wonders [39] and Ticket to Ride [40]. Furthermore, Baier et al biased MCTS with a player model, extracted from game-play data, to have an agent that was competitive while approximating human-like play [41]. Tesauro [42], on the other hand, used TD-Lambda to train Backgammon agents at a superhuman level.…”
Section: B. Game-playing AI Agents
confidence: 99%
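For completeness, the TD-Lambda method mentioned in the last sentence of that excerpt [42] rests on the standard eligibility-trace update. The sketch below shows that update for a linear value function over hand-crafted features; it is a generic textbook formulation and stands in for, rather than reproduces, Tesauro's neural-network TD-Gammon.

```python
import numpy as np

def td_lambda_episode(features, rewards, w,
                      alpha=0.01, gamma=1.0, lam=0.7):
    """Apply one episode of TD(lambda) updates to a linear value function.

    `features[t]` is the feature vector of the state at step t, `rewards[t]`
    the reward observed on leaving it, and V(s) = w . x(s).  All names and
    constants here are illustrative.
    """
    z = np.zeros_like(w)                            # eligibility trace
    for t in range(len(rewards)):
        x_t = features[t]
        v_t = w @ x_t
        v_next = w @ features[t + 1] if t + 1 < len(features) else 0.0
        delta = rewards[t] + gamma * v_next - v_t   # TD error
        z = gamma * lam * z + x_t                   # decay and accumulate the trace
        w = w + alpha * delta * z                   # move weights along the trace
    return w
```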