This paper studies three ways to construct a nonhomogeneous jump Markov process: (i) via a compensator of the random measure of a multivariate point process, (ii) as a minimal solution of the backward Kolmogorov equation, and (iii) as a minimal solution of the forward Kolmogorov equation. The main conclusion is that, for a given measurable transition intensity, commonly called a Q-function, all three constructions define the same transition function. If this transition function is regular, that is, if the probability of an accumulation of jumps is zero, then it is the unique solution of both the backward and the forward Kolmogorov equations. For continuous Q-functions, Kolmogorov equations were studied in Feller's seminal paper; the present paper extends Feller's results for continuous Q-functions to measurable Q-functions and provides additional results.
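For orientation, the two equations can be sketched in standard notation; the following display is an illustrative reconstruction under assumed notation, not quoted from the paper. For a Q-function q(x,t,B) with total jump intensity q(x,t) = q(x,t, X \setminus \{x\}), the backward and forward Kolmogorov equations for a transition function P(s,x;t,B), 0 \le s \le t, read

\[
\frac{\partial}{\partial s} P(s,x;t,B)
  = q(x,s)\, P(s,x;t,B) - \int_{X\setminus\{x\}} P(s,y;t,B)\, q(x,s,dy),
\]
\[
\frac{\partial}{\partial t} P(s,x;t,B)
  = \int_{X} q\big(y,t,B\setminus\{y\}\big)\, P(s,x;t,dy) - \int_{B} q(y,t)\, P(s,x;t,dy),
\]

with the boundary condition P(t,x;t,B) = \mathbf{1}_{B}(x). In this notation, regularity of the transition function amounts to P(s,x;t,X) = 1 for all s \le t, that is, no accumulation of jumps in finite time.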
As is well known, transition probabilities of jump Markov processes satisfy Kolmogorov's backward and forward equations. In his seminal 1940 paper, William Feller investigated solutions of Kolmogorov's equations for jump Markov processes. Recently the authors solved the problem studied by Feller and showed that the minimal solution of Kolmogorov's backward and forward equations is the transition probability of the corresponding jump Markov process if the transition rate at each state is bounded. This paper presents more general results. For Kolmogorov's backward equation, the sufficient condition for the described property of the minimal solution is that the transition rate at each state is locally integrable, and for Kolmogorov's forward equation the corresponding sufficient condition is that the transition rate at each state is locally bounded.
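As a hedged sketch of the minimal solution discussed above (notation assumed for illustration, consistent with the display after the first abstract), the minimal solution can be written as a sum over the number of jumps:

\[
\bar{P}(s,x;t,B) = \sum_{n=0}^{\infty} P^{(n)}(s,x;t,B),
\]
\[
P^{(0)}(s,x;t,B) = \mathbf{1}_{B}(x)\, e^{-\int_s^t q(x,u)\,du},
\]
\[
P^{(n+1)}(s,x;t,B)
  = \int_s^t e^{-\int_s^u q(x,\theta)\,d\theta}
    \int_{X\setminus\{x\}} P^{(n)}(u,y;t,B)\, q(x,u,dy)\, du,
\]

where P^{(n)} is the probability of being in B at time t, starting from state x at time s, after exactly n jumps, and \bar{P} is the smallest nonnegative solution of the equations. Note that P^{(0)} is well defined exactly when \int_s^t q(x,u)\,du < \infty, which is the local integrability condition appearing in the backward-equation result.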
This paper extends to Continuous-Time Jump Markov Decision Processes (CTJMDPs) the classic result for Markov Decision Processes stating that, for a given initial state distribution, for every policy there is a (randomized) Markov policy, which can be defined in a natural way, such that at each time instant the marginal distributions of state-action pairs under the two policies coincide. It is shown in this paper that this equality holds for a CTJMDP if the corresponding Markov policy defines a nonexplosive jump Markov process. If this Markov process is explosive, then at each time instant the marginal probability that a state-action pair belongs to a measurable set of state-action pairs is not greater under the described Markov policy than under the original policy. These results are used in this paper to prove that, for expected discounted total costs and for average costs per unit time, for a given initial state distribution and each policy for a CTJMDP, the described Markov policy has the same or better performance.
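For illustration only (the conditional-distribution formula below is a standard construction, assumed here rather than quoted from the paper), the "natural" Markov policy \pi corresponding to a policy \sigma can be defined through conditional action distributions,

\[
\pi_t(A \mid x) = P^{\sigma}\big(a_t \in A \mid x_t = x\big),
\]

so that nonexplosiveness gives equality of marginals,
\[
P^{\pi}(x_t \in B,\ a_t \in A) = P^{\sigma}(x_t \in B,\ a_t \in A),
\]
while in the explosive case only the inequality \(P^{\pi}(x_t \in B,\ a_t \in A) \le P^{\sigma}(x_t \in B,\ a_t \in A)\) holds. If, in addition, the cost rate c(x,a) is nonnegative (an assumption made here purely for illustration), the discounted criterion compares as

\[
E^{\pi}\!\int_0^{\infty} e^{-\alpha t}\, c(x_t,a_t)\, dt
  \;\le\;
  E^{\sigma}\!\int_0^{\infty} e^{-\alpha t}\, c(x_t,a_t)\, dt,
\]

which is one way to read the "same or better performance" conclusion.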