We consider N-player and mean field games in continuous time over a finite horizon, where the position of each agent belongs to {−1, 1}. If there is uniqueness of mean field game solutions, e.g. under monotonicity assumptions, then the master equation possesses a smooth solution which can be used to prove convergence of the value functions and of the feedback Nash equilibria of the N-player game, as well as a propagation of chaos property for the associated optimal trajectories. We study here an example with anti-monotonous costs and show that the mean field game has exactly three solutions. We prove that the value functions converge to the entropy solution of the master equation, which in this case can be written as a scalar conservation law in one space dimension, and that the optimal trajectories admit a limit: they select one mean field game solution, so there is propagation of chaos. Moreover, viewing the mean field game system as the necessary conditions for optimality of a deterministic control problem, we show that the N-player game selects the optimizer of this problem.

ALEKOS CECCHIN, PAOLO DAI PRA, MARKUS FISCHER, AND GUGLIELMO PELINO

are shown to be concentrated on weak solutions of the corresponding mean field game. This concept of solution is also used in another, more recent work by Lacker; see below. Here, we are interested in the convergence problem for Nash equilibria in Markov feedback strategies with full state information. A first result in this direction was given by Gomes, Mohr, and Souza [19] in the case of finite state dynamics. There, convergence of Markovian Nash equilibria to the mean field game limit is proved, but only if the time horizon is small enough. A breakthrough was achieved by Cardaliaguet, Delarue, Lasry, and Lions in [7].
In the setting of games with non-degenerate Brownian dynamics, possibly including common noise, those authors establish convergence to the mean field game limit, in the sense of convergence of value functions as well as propagation of chaos for the optimal state trajectories, for arbitrary time horizon, provided the so-called master equation associated with the mean field game possesses a unique sufficiently regular solution. The master equation arises as the formal limit of the Hamilton-Jacobi-Bellman systems determining the Markov feedback Nash equilibria. If well-posed, it yields the optimal value in the mean field game as a function of initial time, state, and distribution. It thus also provides the optimal control action, again as a function of time, state, and measure variable. This allows one, in particular, to compare the prelimit Nash equilibria to the solution of the limit model through coupling arguments. If the master equation possesses a unique regular solution, which is guaranteed under the Lasry-Lions monotonicity conditions, then the convergence analysis can be considerably refined. In this case, for games with finite state dynamics, Cecchin and Pelino [11] and, independently, Bayraktar and Cohen [3] obtain a central limit theorem and lar...
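For orientation, the master equation discussed above can be written down explicitly. The following display is a sketch of its standard form in the continuous-state setting of [7], without common noise; the notation (Hamiltonian H, running cost F, terminal cost G, measure derivative D_m U) is the customary one rather than taken from the present text, and the precise structural assumptions vary across the literature:

\[
\begin{aligned}
-\partial_t U(t,x,m) - \Delta_x U(t,x,m) + H\bigl(x, D_x U(t,x,m)\bigr)
&- \int \operatorname{div}_y \bigl[ D_m U(t,x,m,y) \bigr]\, dm(y) \\
&+ \int D_m U(t,x,m,y) \cdot D_p H\bigl(y, D_x U(t,y,m)\bigr)\, dm(y) = F(x,m),
\end{aligned}
\]

posed on (0, T) × (state space) × (probability measures on the state space), with terminal condition U(T, x, m) = G(x, m). In the two-state model studied here, x ranges over {−1, 1} and the measure variable m can be identified with a single scalar, which is what allows the master equation to be rewritten as a scalar conservation law in one space dimension.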