We introduce a modification of Perron's method, where semisolutions are considered in a carefully defined asymptotic sense. With this definition, we can show, in a rather elementary way, that in a zero-sum game or a control problem (with or without model uncertainty), the value function over all strategies coincides with the value function over Markov strategies discretized in time. Therefore, there are always discretized Markov ε-optimal strategies (uniform with respect to the bounded initial condition). With a minor modification, the method produces a value and approximate saddle points for an asymmetric game of feedback strategies versus counterstrategies.
Introduction. The aim of the paper is to introduce the asymptotic Perron's method, i.e., the construction of a solution of the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation as the supremum/infimum of carefully defined asymptotic semisolutions. Using this method we show, in a rather elementary way, that the value functions of zero-sum games/control problems can be (uniformly) approximated by some simple Markov strategies for the weaker player (the player in the exterior of the sup/inf or inf/sup). From this point of view, we can think of the method as an alternative to the shaken coefficients method of Krylov [Kry00] (in the case of only one player, under slightly different technical assumptions) or to the related method of regularization of solutions of HJBIs by Świȩch in [Świ96a] and [Świ96b] (for control problems or games in the Elliott-Kalton formulation). The method of shaken coefficients has recently been used to study games in the Elliott-Kalton formulation in [BN] under a convexity assumption (not needed here).

While the result on zero-sum games (under our standing assumptions) is rather new, albeit certainly expected, the main goal of the paper is to present the method. To the best of our knowledge, this modification of Perron's method does not appear in the literature. In addition, we believe that it applies to more general situations than the ones considered here, using either a stochastic formulation (as in the present work) or an analytic one (see Remark 3.1). Compared to the method of shaken coefficients of Krylov, or to the regularization of solutions by Świȩch, the analytic approximation of the value function/solution of the HJB equation by smooth approximate solutions is replaced by the Perron construction. The careful definition of asymptotic semisolutions allows us to prove that such semisolutions work well with Markov strategies. The idea of restricting actions to a deterministic time grid is certainly not new; we simply provide a method that works well for such strategies/controls. The arguments display once again the *
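To fix ideas, the following is a rough schematic of the Perron sandwich underlying the construction; the notation ($V$ for the value function, $\mathcal{U}^{-}$ and $\mathcal{U}^{+}$ for the classes of asymptotic sub- and supersolutions) is chosen here only for illustration, and the precise definitions appear later in the paper:
\[
v_{-}(s,x) \;:=\; \sup_{w \in \mathcal{U}^{-}} w(s,x)
\;\le\; V(s,x) \;\le\;
\inf_{w \in \mathcal{U}^{+}} w(s,x) \;=:\; v_{+}(s,x).
\]
The Perron-type arguments then allow $v_{-}$ and $v_{+}$ to be compared, closing the sandwich so that all three quantities coincide; asymptotic semisolutions close to $V$ are what produce the discretized Markov $\varepsilon$-optimal strategies described in the abstract.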