Abstract-Perfect information Monte Carlo (PIMC) search is the method of choice for constructing strong AI systems for trick-taking card games. PIMC search evaluates moves in imperfect information games by repeatedly sampling worlds based on state inference and estimating move values by solving the corresponding perfect information scenarios. PIMC search performs well in trick-taking card games despite the fact that it suffers from the strategy fusion problem, whereby the game's information set structure is ignored because moves are evaluated opportunistically in each world. In this paper we describe imperfect information Monte Carlo (IIMC) search, which aims at mitigating this problem by basing move evaluation on more realistic playout sequences rather than perfect information move values. We show that RecPIMC, a recursive IIMC search variant based on perfect information evaluation, performs considerably better than PIMC search in a large class of synthetic imperfect information games and the popular card game of Skat, for which PIMC search is the state-of-the-art cardplay algorithm.
I. INTRODUCTION

In recent years, AI research in the area of imperfect information games has flourished. For instance, in 2008 the Poker program "Polaris" defeated a group of six strong human players in two-player limit Texas hold'em in a duplicate match setting [1], and in 2009 "Kermit" reached expert playing strength in Skat, a popular trick-based card game similar to Bridge [2]. The considerable progress in Poker is due to new techniques such as counterfactual regret minimization for approximating Nash equilibrium strategies in smaller, abstract versions of the game [3], whereas in Skat fast perfect information Monte Carlo (PIMC) search combined with explicit state inference and heuristic state evaluation elevated programs to the next level.

Both approaches have distinct advantages and disadvantages. For instance, solving abstracted game versions off-line leads to fast on-line move computation because, essentially, available moves only have to be sampled from pre-computed probability distributions. However, finding good game abstractions that allow us to approximate move distributions in the original game well is not trivial. A convenient property of Poker is that the set of legal moves in each game state does not depend on the cards players are holding. Therefore, we only need to consider state abstractions (such as hand-strength bucketing) and do not have to deal with move abstractions.

By contrast, in trick-based card games such as Bridge, Spades, and Skat, legal moves are defined by the cards players hold. Therefore, abstracting such games with the intent of using pre-computed move probabilities is non-trivial. PIMC search deals with this problem by sampling game states in accordance with observed moves and private state information, revealing the sampled states to all players, and then evaluating moves with search algorithms tailored to the perfect information setting. The obvious drawback is that this method blatantly ignores players' ignorance and, conse...
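The world-sampling scheme just described can be sketched in a few lines. The following is a minimal illustration, not the implementation used in any of the cited programs: `sample_world` and `solve_world` are hypothetical stand-ins for the game-specific inference step and the perfect information solver, and the toy payoff table exists only to exercise the loop.

```python
import random

def pimc_best_move(legal_moves, sample_world, solve_world, num_worlds=100):
    """PIMC sketch: sample worlds consistent with the information set,
    solve each one with perfect information, and average move values."""
    totals = {m: 0.0 for m in legal_moves}
    for _ in range(num_worlds):
        world = sample_world()              # deal consistent with observations
        for m in legal_moves:
            totals[m] += solve_world(world, m)  # perfect information value of m
    return max(legal_moves, key=totals.get)

# Toy illustration (hypothetical game): a hidden bit decides the payoffs.
random.seed(0)
payoff = {0: {"a": 1.0, "b": 0.0}, 1: {"a": 0.4, "b": 0.3}}
best = pimc_best_move(
    ["a", "b"],
    sample_world=lambda: random.choice([0, 1]),
    solve_world=lambda w, m: payoff[w][m],
)
print(best)  # "a", since it dominates "b" in every sampled world
```

Note that each world is evaluated with full knowledge of the hidden state, which is exactly the source of the strategy fusion problem discussed above: the averaged values assume the player can act differently in worlds it cannot actually distinguish.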