2017
DOI: 10.1007/s10479-017-2538-8

Kolmogorov’s equations for jump Markov processes with unbounded jump rates

Abstract: As is well known, transition probabilities of jump Markov processes satisfy Kolmogorov's backward and forward equations. In his seminal 1940 paper, William Feller investigated solutions of Kolmogorov's equations for jump Markov processes. Recently the authors solved the problem studied by Feller and showed that the minimal solution of Kolmogorov's backward and forward equations is the transition probability of the corresponding jump Markov process if the transition rate at each state is bounded. This paper pre…
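To illustrate the forward equation mentioned in the abstract, here is a minimal sketch for a finite-state jump Markov process with bounded jump rates (the regular case, where the minimal solution is the unique transition probability). It Euler-integrates Kolmogorov's forward equation dP(t)/dt = P(t)Q from P(0) = I for a hypothetical two-state chain and checks the result against the known closed form; the rate values and step count are illustrative choices, not from the paper.

```python
import math

def forward_equation_step(P, Q, dt):
    """One Euler step of Kolmogorov's forward equation dP/dt = P Q."""
    n = len(P)
    return [[P[i][j] + dt * sum(P[i][k] * Q[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def transition_matrix(Q, t, steps=20000):
    """Approximate P(t) by integrating the forward equation from P(0) = I."""
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    dt = t / steps
    for _ in range(steps):
        P = forward_equation_step(P, Q, dt)
    return P

# Two-state chain with jump rates a (state 0 -> 1) and b (state 1 -> 0).
# The rates are bounded, so no explosion occurs and the minimal solution
# is the transition probability of the process.
a, b = 2.0, 1.0
Q = [[-a, a], [b, -b]]
t = 0.7
P = transition_matrix(Q, t)

# Closed form for this chain: P_00(t) = b/(a+b) + a/(a+b) * exp(-(a+b) t)
exact = b / (a + b) + a / (a + b) * math.exp(-(a + b) * t)
print(abs(P[0][0] - exact) < 1e-3)  # → True
```

Because each row of Q sums to zero, the Euler scheme preserves row sums of P exactly, so every iterate remains a stochastic matrix up to floating-point error. With unbounded jump rates (the subject of the paper) this finite-dimensional picture breaks down and the minimal solution need not be the only one.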

Cited by 12 publications (19 citation statements); references 15 publications.
“…Proof. This follows from the reasoning of the proof of Lemma 1(a) in [14] based on the Novikov separation theorem. ✷…”
Section: Existence of a Deterministic Stationary Optimal Policy
confidence: 96%
“…Several of the statements below do not need the bounding function w in Condition 3 to be continuous. In this connection, we also mention that a Borel measurable function w satisfying the inequality in Condition 1 always exists; see Lemma 1 of [9] and recall (1).…”
Section: Optimality Results
confidence: 99%
“…Proof of Theorem 3.1. The following statement is a consequence of Theorem 4.2 of [16] (see also [18]) and is the starting point of our reasoning. The above lemma implies that, without loss of generality, we can restrict attention to the class of Markov policies for problems (I) and (II), i.e.…”
confidence: 79%