2006
DOI: 10.1239/jap/1165505206

On the total reward variance for continuous-time Markov reward chains

Abstract: Extending the discrete-time case, this note investigates the variance of the total cumulative reward for continuous-time Markov reward chains with finite state spaces. The results parallel the discrete-time results: in particular, the variance growth rate is shown to be asymptotically linear in time, and expressions are provided for computing this growth rate.
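
Illustrative only, not taken from the note itself: the sketch below shows one standard way such a growth rate can be computed numerically for a finite continuous-time Markov reward chain, assuming the usual deviation-matrix (Poisson-equation) form of the asymptotic variance for a reward accrued at fixed state-dependent rates from a stationary start. The generator Q, reward vector r, and all other names are hypothetical examples and are not claimed to reproduce the expressions derived in the paper.

```python
# Hypothetical 3-state example: generator Q (rows sum to 0) and reward-rate vector r.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 1.0,  1.0, -2.0]])     # irreducible CTMC generator
r = np.array([1.0, 4.0, 0.5])          # reward accrued per unit time in each state
n = Q.shape[0]

# Stationary distribution pi: solve pi Q = 0 together with pi summing to 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

g = pi @ r                               # long-run average reward (gain)
Pi = np.outer(np.ones(n), pi)            # matrix with pi in every row
D = np.linalg.inv(Pi - Q) - Pi           # deviation matrix, D = int_0^inf (e^{Qt} - Pi) dt
sigma2 = 2.0 * pi @ ((r - g) * (D @ r))  # assumed asymptotic variance growth rate

# Check of asymptotically linear growth for a stationary start:
# Var[R(t)] = 2 * int_0^t (t - u) c(u) du, with c(u) = rbar^T diag(pi) e^{Qu} rbar.
rbar = r - g
def var_R(t, m=2000):
    u = np.linspace(0.0, t, m)
    c = np.array([rbar @ (np.diag(pi) @ expm(Q * ui) @ rbar) for ui in u])
    return 2.0 * trapezoid((t - u) * c, u)

for t in (50.0, 100.0):
    print(f"t={t:6.1f}  Var[R(t)]/t={var_R(t) / t:.4f}  sigma^2={sigma2:.4f}")
```

The trapezoidal check simply evaluates the exact covariance integral for Var[R(t)] at a few horizons and compares Var[R(t)]/t with the candidate growth rate, illustrating the asymptotically linear growth stated in the abstract.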

Cited by 7 publications (1 citation statement)
References 20 publications
“…Sladky and van Dijk [45], [46] have given results for discrete- and continuous-time chains with fixed rewards. Benito [42] provides variances for discrete chains with random rewards; my proof of Proposition 1 follows his approach.…”
Section: Discussion
confidence: 99%