Abstract. We prove that finding optimal positional strategies for stochastic mean payoff games when the value of every state of the game is known is, in general, as hard as solving such games tout court. This answers a question posed by Daniel Andersson and Peter Bro Miltersen.

In this note, we consider perfect-information zero-sum stochastic games, which, for short, we will simply call stochastic games. For us, a stochastic game is a finite directed graph whose vertices we call states and whose edges we call transitions; multiple edges and loops are allowed, but no state can be a sink. To each state s is associated an owner o(s), which is one of the two players Max and Min. Each transition s →_{A,p} t has an action A and a probability p ∈ ℚ ∩ [0, 1], with the condition that, for each state s, the probabilities of the transitions exiting s associated to the same action must sum to 1. We say that the action A is available at state s if one of the transitions exiting s is associated to A. Furthermore, to each action A is associated a reward r(A) ∈ ℚ.

A play of a stochastic game G begins in some state s_0 and produces an unending sequence of states {s_i}_{i∈ℕ} and actions {A_i}_{i∈ℕ}. At move i, the owner of the current state s_i chooses an action A_i among those available at s_i; then one of the transitions exiting s_i with action A_i is selected at random according to their respective probabilities, and the next state s_{i+1} is the destination of the chosen transition. A play can be evaluated according to the β-discounted payoff criterion

    v_β(A_0, A_1, …) = (1 − β) ∑_{i≥0} β^i r(A_i).
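The definitions above lend themselves to a small data structure. The following is a minimal sketch, not code from the paper: all names (StochasticGame, Transition, discounted_payoff) are illustrative, probabilities are held as floats rather than rationals, and the discounted payoff is computed over a finite prefix of a play.

```python
import random
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Transition:
    src: str      # state the transition exits
    action: str   # action A labelling the transition
    prob: float   # probability p in [0, 1]
    dst: str      # destination state

@dataclass
class StochasticGame:
    owner: dict = field(default_factory=dict)        # state -> "Max" or "Min"
    reward: dict = field(default_factory=dict)       # action A -> reward r(A)
    transitions: list = field(default_factory=list)  # list of Transition

    def available(self, s):
        # A is available at s if some transition exiting s carries A
        return sorted({t.action for t in self.transitions if t.src == s})

    def check(self):
        # for each state, transitions sharing the same action must sum to 1
        sums = defaultdict(float)
        for t in self.transitions:
            sums[(t.src, t.action)] += t.prob
        assert all(abs(v - 1.0) < 1e-9 for v in sums.values())

    def step(self, s, a):
        # draw the next state among transitions exiting s with action a
        outs = [t for t in self.transitions if t.src == s and t.action == a]
        return random.choices(outs, weights=[t.prob for t in outs])[0].dst

def discounted_payoff(rewards, beta):
    # v_beta = (1 - beta) * sum_i beta^i * r(A_i), truncated to the given prefix
    return (1 - beta) * sum(beta ** i * r for i, r in enumerate(rewards))
```

As a sanity check, a one-state game whose single loop always pays reward 1 has truncated β-discounted payoff (1 − β)(1 + β + β²) = 1 − β³ over three moves.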