“…Usual difficulties involve the solution of the Hamilton-Jacobi-Bellman (HJB) equations associated with the optimal control problem: in [5], the stochastic HJB equation is solved iteratively by successive approximations; in [6], the infinite-time HJB equation is reformulated as an eigenvalue problem; in [7], a transformation approach is proposed for solving the HJB equation arising in quadratic-cost control for nonlinear deterministic and stochastic systems. Finally, in a pair of recent papers, a solution to the nonlinear HJB equation is obtained by expressing it in the form of decoupled Forward and Backward Stochastic Differential Equations (FBSDEs), in an L2- and an L1-type optimal control setting (see [8] and [9], respectively). As stated above, the solutions proposed in these references rely on complete knowledge of the state of the system; hence, they do not require any nonlinear state-estimation algorithm to infer information from noisy measurements.…”