How do people reason about their opponent in turn-taking games? Often, people do not make the decisions that game theory would prescribe. We present a logic that can play a key role in understanding how people make their decisions, by delineating all plausible reasoning strategies in a systematic manner. This in turn makes it possible to construct a corresponding set of computational models in a cognitive architecture. These models can be run and fitted to the participants' data in terms of decisions, response times, and answers to questions. We validate these claims on the basis of an earlier game-theoretic experiment about the turn-taking game "Marble Drop with Surprising Opponent", in which the opponent often starts with a seemingly irrational move. We explore two ways of grouping the participants into reasonable "player types". The first way is based on latent class analysis, which divides the players into three classes according to their first decisions in the game: Random players, Learners, and Expected players, who make decisions consistent with forward induction. The second way is based on participants' answers to a question about their opponent, classified according to levels of theory of mind: zero-order, first-order, and second-order. It turns out that more sophisticated decision classes and higher orders of theory of mind both correspond to greater success, as measured by monetary rewards, and to longer decision times. Next, we use the logical language to express different kinds of strategies that people apply when reasoning about their opponent and making decisions in turn-taking games, as well as the 'reasoning types' reflected in their behavior. Then,
we translate the logical formulas into computational cognitive models in the PRIMs architecture. Finally, we run two of the resulting models, corresponding to the strategy of only being interested in one's own payoff and to the myopic strategy, in which one can only look ahead to a limited number of nodes. It turns out that the participant data fit the own-payoff strategy, not the myopic one. The article closes the circle from experiments via logic and cognitive modelling back to predictions about new experiments.
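As a rough illustration of the two decision rules mentioned in the final modelling step, the sketch below contrasts an "own-payoff" rule, which compares the participant's own payoff at every reachable outcome, with a "myopic" rule that only looks a limited number of nodes ahead. The payoff sequence, the function names, and the exact formalisation of the two rules are illustrative assumptions, not definitions taken from the experiment or the PRIMs models.

```python
"""Hypothetical sketch of an 'own-payoff' versus a 'myopic' decision rule in a
centipede-like turn-taking game. outcomes[i] is the pair (participant payoff,
opponent payoff) if the game stops at decision point i; the last entry is the
outcome if nobody ever stops. All numbers below are made up for illustration."""

from typing import List, Tuple

Outcome = Tuple[int, int]  # (participant's payoff, opponent's payoff)


def own_payoff_decision(outcomes: List[Outcome], position: int) -> str:
    """Continue iff some later reachable outcome gives the participant a
    strictly higher payoff than stopping right now (opponent ignored)."""
    stop_now = outcomes[position][0]
    later = [o[0] for o in outcomes[position + 1:]]
    return "continue" if later and max(later) > stop_now else "stop"


def myopic_decision(outcomes: List[Outcome], position: int, horizon: int = 1) -> str:
    """Same comparison, but restricted to the next `horizon` outcomes."""
    stop_now = outcomes[position][0]
    visible = [o[0] for o in outcomes[position + 1:position + 1 + horizon]]
    return "continue" if visible and max(visible) > stop_now else "stop"


if __name__ == "__main__":
    # Hypothetical payoff sequence, not taken from the Marble Drop experiment.
    game = [(2, 1), (1, 3), (4, 2), (3, 4)]
    print(own_payoff_decision(game, 0))  # 'continue': a later outcome pays 4 > 2
    print(myopic_decision(game, 0, 1))   # 'stop': only payoff 1 is visible, 1 < 2
```

Under these assumed rules the two strategies can prescribe different moves at the same node, which is the kind of behavioural difference that running and fitting the corresponding cognitive models is meant to detect.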