In this paper we study the fully nonlinear stochastic Hamilton-Jacobi-Bellman (HJB) equation for the optimal stochastic control problem of stochastic differential equations with random coefficients. The notion of viscosity solution is introduced, and we prove that the value function of the optimal stochastic control problem is the maximal viscosity solution of the associated stochastic HJB equation. For the superparabolic case, where the diffusion coefficients are deterministic functions of the time, state and control variables, uniqueness is addressed as well.

Mathematics Subject Classification (2010): 49L20, 49L25, 93E20, 35D40, 60H15

Keywords: stochastic Hamilton-Jacobi-Bellman equation, optimal stochastic control, backward stochastic partial differential equation, viscosity solution

where $T \in (0, \infty)$ is a fixed deterministic terminal time. Let $U \subset \mathbb{R}^n$ be a nonempty compact set and $\mathcal{U}$ the set of all $U$-valued and $\mathscr{F}_t$-adapted processes. The process $(X_t)_{t \in [0,T]}$ is the state process, governed by the control $\theta \in \mathcal{U}$. We sometimes write $X_t^{r,x;\theta}$ for $0 \le r \le t \le T$
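For orientation only (the state equation referred to by "where" above is not reproduced in this excerpt): under the illustrative assumption that the dynamics have a drift $b$ and a diffusion $\sigma$, possibly random, the notation $X_t^{r,x;\theta}$ would typically denote the solution of a controlled SDE of the form
\[
X_t^{r,x;\theta} \;=\; x \;+\; \int_r^t b\bigl(s, X_s^{r,x;\theta}, \theta_s\bigr)\,\mathrm{d}s
\;+\; \int_r^t \sigma\bigl(s, X_s^{r,x;\theta}, \theta_s\bigr)\,\mathrm{d}W_s,
\qquad r \le t \le T,
\]
started at time $r$ from the state $x$ and driven by the control $\theta$, with $W$ a Brownian motion; the coefficients $b$, $\sigma$ and the driving noise $W$ here are assumptions for illustration and are not taken from the displayed text.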