2012
DOI: 10.1007/s10479-012-1184-4
Uniform ergodicity of continuous-time controlled Markov chains: A survey and new results

Abstract: We review several variants of ergodicity for continuous-time Markov chains on a countable state space, including strong ergodicity, ergodicity in weighted-norm spaces, and exponential and subexponential ergodicity. We also study uniform exponential ergodicity for continuous-time controlled Markov chains, as a tool to deal with average reward and related optimality criteria. We discuss the corresponding ergodicity properties and present an application to a controlled population system.
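The central notion in the abstract can be made concrete. A standard formulation of uniform w-exponential ergodicity for a chain on a countable state space is sketched below; this is the usual textbook form, not necessarily the paper's exact statement.

```latex
% Uniform w-exponential ergodicity (standard form; a sketch, not
% necessarily the paper's precise conditions). Given a weight
% function w >= 1 on the countable state space S, the w-norm of
% u : S -> R is  ||u||_w := sup_{i in S} |u(i)| / w(i).
% A chain with transition probabilities p_t(i,j) and invariant
% distribution mu is w-exponentially ergodic if there exist
% constants C > 0 and delta > 0 such that, for all t >= 0,
\[
  \sup_{i \in S} \frac{1}{w(i)}
  \sum_{j \in S} \bigl| p_t(i,j) - \mu(j) \bigr|\, w(j)
  \;\le\; C\, e^{-\delta t} .
\]
% In the controlled setting, "uniform" additionally requires that
% C and delta can be chosen independently of the stationary policy.
```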

Cited by 9 publications (9 citation statements)
References 42 publications (61 reference statements)
“…Theorem 4 is a modified version of Theorem 3.1 in [12], while Theorem 6 is a continuous counterpart of Proposition 5.1.2 in Zurkowski [5]. Finally we have Theorem 7 which follows from Theorem 2.16 of [11].…”
Section: Results
confidence: 76%
“…The studies we relied on, such as Connor and Fort [3], applied the results of their work to "tame" chains (which, technically speaking, are chains with subgeometric drift φ(v) ∼ v[ln v]^{−α}). Also, the study of continuous-time controlled Markov chains on a countable state space in [11] was applied to discounted and average reward optimality criteria. Our future research will therefore focus on making any possible improvements to this study by refining our results and identifying applications of them.…”
Section: Results
confidence: 99%
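The "tame" chains referenced in the statement above are characterized through a subgeometric Foster–Lyapunov drift condition. A standard form of that condition is sketched here; the exact variant used by Connor and Fort [3] may differ.

```latex
% Subgeometric Foster--Lyapunov drift condition (standard form;
% a sketch, not necessarily the exact condition of [3]). For a
% Lyapunov function V >= 1, a concave increasing rate function phi,
% a petite set C, and a constant b < infinity:
\[
  \Delta V(x) \;\le\; -\phi\bigl(V(x)\bigr) + b\,\mathbf{1}_C(x),
  \qquad x \in S .
\]
% "Tame" refers to rate functions close to linear, e.g.
% phi(v) ~ v [ln v]^{-alpha} for some alpha > 0, which yields
% subgeometric (slower than exponential) convergence rates.
```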
“…As for part (b), as in Remark 3.3, one can see that for each t_0 > 0, the family {η_t^f, t ≥ t_0} is tight for each initial state z ∈ S. As a result, the controlled process (under each deterministic stationary policy) is bounded in probability on average, and now part (b) follows from Theorem 3.1 of [32]. The reasoning in the proof of Theorem 3.13 in [41] applies to show that, under the conditions of the statement, the A-CTMDP model (and thus each of the Â-CTMDP models) is uniformly w′-exponentially ergodic with respect to all randomized stationary policies. Following from this, parts (a) and (c) immediately hold; for part (a), further see the reasoning in the proof of Lemma 7.7 of [20].…”
Section: Proposition 3.3 (Suppose Conditions 2.1, 3.1 and 3.3 Are Satisfied)
confidence: 80%
“…In view of the above theorems the result from [12] can be sharpened. In Section 5 we will discuss this and compare the result to a criterion used in [4,10,9].…”
confidence: 99%