Abstract. We consider two-person zero-sum stochastic mean payoff games with perfect information, or BWR-games, given by a digraph G = (V = V_B ∪ V_W ∪ V_R, E) with local rewards r : E → R and three types of vertices: black V_B, white V_W, and random V_R. The game is played by two players, White and Black: when the play is at a white (black) vertex v, White (Black) selects an outgoing arc (v, u); when the play is at a random vertex v, a successor u is chosen with the given probability p(v, u). In all cases, Black pays White the value r(v, u). The play continues forever, and White aims to maximize (Black aims to minimize) the limiting mean (that is, average) payoff. It was recently shown in [BEGM09a] that BWR-games are polynomially equivalent to the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games (SSG), stochastic parity games, and Markov decision processes. In this paper, we give a new algorithm for solving BWR-games in the ergodic case, that is, when the optimal values do not depend on the initial position. Our algorithm solves a BWR-game by reducing it, via a potential transformation, to a canonical form in which the value and optimal strategies of both players are obvious for every initial position, since every locally optimal move is optimal in the whole game. We show that this algorithm is pseudo-polynomial when the number of random vertices is constant. We also provide an almost matching lower bound on its running time and show that this bound holds for a wider class of algorithms. Our preliminary experiments indicate that the algorithm performs much better in practice than its worst-case estimate suggests. This, together with the fact that every iteration is simple and can be implemented in a distributed way, makes it a good candidate for practical applications.
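For orientation, the following sketch uses the standard definition of a potential transformation (the abstract itself does not spell it out, so this is an assumed formulation): a potential x : V → R modifies each local reward while leaving the limiting mean payoff of every infinite play unchanged, because the potential terms telescope.

\[
  r_x(v,u) \;=\; r(v,u) + x(u) - x(v),
  \qquad
  \frac{1}{n}\sum_{i=0}^{n-1} r_x(v_i,v_{i+1})
  \;=\;
  \frac{1}{n}\sum_{i=0}^{n-1} r(v_i,v_{i+1}) + \frac{x(v_n)-x(v_0)}{n}.
\]

Since the correction term (x(v_n) − x(v_0))/n vanishes as n → ∞ for a bounded potential, the transformed game has the same values; the point of the canonical form is to choose x so that locally optimal moves under r_x are optimal for the whole game.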