This paper investigates two-person zero-sum stochastic games for piecewise deterministic Markov decision processes with a risk-sensitive finite-horizon cost criterion on a general state space. The transition and cost/reward rates are allowed to be unbounded from below and above. Under mild conditions, we show the existence of the value of the game and of an optimal randomized Markov saddle-point equilibrium in the class of all admissible feedback strategies. We obtain these results by studying the corresponding risk-sensitive finite-horizon optimality differential equations within a class of possibly unbounded functions, for which an extended Feynman-Kac formula is also shown to hold.
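For orientation, a risk-sensitive finite-horizon criterion of the kind referred to above is commonly written in exponential-of-integral form; the notation below (risk parameter \theta, running cost c, state process \xi_s, strategy pair (\pi^1, \pi^2)) is illustrative and not taken from the paper:

\[
J_\theta(t, x, \pi^1, \pi^2) \;=\; \mathbb{E}_{t,x}^{\pi^1,\pi^2}\!\left[ \exp\!\left( \theta \int_t^T c\big(s, \xi_s, \pi^1_s, \pi^2_s\big)\, ds \right) \right],
\]

where, in the zero-sum setting, one player seeks to minimize and the other to maximize this quantity, and the value of the game exists when the lower and upper values coincide.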
We study nonzero-sum stochastic games for continuous-time Markov decision processes on a denumerable state space with a risk-sensitive ergodic cost criterion. Transition rates and cost rates are allowed to be unbounded. Under a Lyapunov-type stability assumption, we show that the corresponding system of coupled HJB equations admits a solution, which leads to the existence of a Nash equilibrium in stationary strategies. We establish this using an approach involving principal eigenvalues associated with the HJB equations. Furthermore, exploiting an appropriate stochastic representation of the principal eigenfunctions, we completely characterize Nash equilibria in the space of stationary Markov strategies.
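A risk-sensitive ergodic cost of the type studied here is usually defined, for player i, as a long-run exponential growth rate; again the symbols (risk parameter \theta_i, running cost c_i, state process X_s) are illustrative rather than the paper's own notation:

\[
\rho_i(x, \pi^1, \pi^2) \;=\; \limsup_{T \to \infty} \frac{1}{\theta_i T} \ln \mathbb{E}_{x}^{\pi^1,\pi^2}\!\left[ \exp\!\left( \theta_i \int_0^T c_i\big(X_s, \pi^1_s, \pi^2_s\big)\, ds \right) \right],
\]

and a Nash equilibrium is a pair of stationary strategies from which neither player can reduce their own \rho_i by a unilateral deviation.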