2005
DOI: 10.1007/3-540-27679-3_40

Total Reward Variance in Discrete and Continuous Time Markov Chains

Cited by 3 publications (4 citation statements)
References 5 publications
“…This is based on the mathematical framework of Markov chains with rewards (MCWR), introduced by Howard (1960) in the context of dynamic programming (see also, e.g., Benito 1982; Puterman 1994; Sladký and van Dijk 2005). An individual moves among states according to a finite-state Markov chain.…”
Section: Markov Chains With Rewards (mentioning)
confidence: 99%
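The excerpt above describes the MCWR setup only in words. As a concrete illustration, here is a minimal Monte Carlo sketch in Python that estimates the mean and variance of the total reward accumulated over a fixed horizon; the two-state transition matrix, reward vector, and horizon are all made up for demonstration and are not taken from the cited works.

```python
# Hedged illustration of the MCWR setup: an individual moves among states of a
# finite-state Markov chain and collects a fixed reward on each visit. We
# simulate many trajectories and estimate the mean and variance of the total
# reward. All parameters below are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # transition probabilities (rows sum to 1)
r = np.array([1.0, 5.0])     # fixed reward earned on each visit to a state
horizon = 50                 # number of steps per trajectory
n_paths = 20_000             # number of simulated trajectories

totals = np.empty(n_paths)
for k in range(n_paths):
    state, total = 0, 0.0
    for _ in range(horizon):
        total += r[state]
        state = rng.choice(2, p=P[state])
    totals[k] = total

print("estimated mean total reward:", totals.mean())
print("estimated total reward variance:", totals.var())
```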
“…Several authors in the widely scattered literature on Markov chains with rewards have addressed the variance of accumulated rewards. Sladký and van Dijk [45], [46] have given results for discrete- and continuous-time chains with fixed rewards. Benito [42] provides variances for discrete chains with random rewards; my proof of Proposition 1 follows his approach.…”
Section: Discussion (mentioning)
confidence: 99%
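For the discrete-time, fixed-reward case referenced above, the first and second moments of the total reward satisfy a simple backward recursion, which yields the per-state variance exactly. The sketch below is my own illustration of that standard recursion, not the notation of [45] or [46]; it reuses the made-up parameters from the simulation sketch earlier, so its output can be cross-checked against the Monte Carlo estimates.

```python
# Exact per-state mean and variance of the total reward over a fixed horizon,
# via the standard first/second moment recursion for a finite-state chain with
# fixed per-state rewards. Parameters are the same invented ones as above.
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # transition matrix
r = np.array([1.0, 5.0])     # fixed reward per visit
horizon = 50

m = np.zeros(2)              # m[i] = E[total reward | start in state i]
s = np.zeros(2)              # s[i] = E[(total reward)^2 | start in state i]
for _ in range(horizon):
    Pm = P @ m
    # Second moment: (r_i + T)^2 expands to r_i^2 + 2 r_i E[T] + E[T^2],
    # where T is the remaining total reward from the next state onward.
    s = r**2 + 2 * r * Pm + P @ s
    m = r + Pm               # first-moment recursion
var = s - m**2
print("per-state mean of total reward:", m)
print("per-state variance of total reward:", var)
```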
“…The strategy to obtain the approximations is to exploit the fact that the evolution over time of total cost corresponds to the evolution of an aperiodic and recurrent Markov chain with an infinite number of states and a unique stationary distribution. In particular, we will use Markov reward process theory (Sladký & van Dijk, 2005; van Dijk & Sladký, 2006). Although Sladký and van Dijk (2005) and van Dijk and Sladký (2006) consider finite-state Markov chains, their technique is also applicable to infinite-state, aperiodic Markov chains with a unique stationary distribution, which is precisely the case for the evolution of K(S).…”
Section: Total Cost (mentioning)
confidence: 99%
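As a hedged illustration of how an infinite-state chain with a unique stationary distribution can be handled numerically: the sketch below truncates a made-up birth-death chain to N states, computes its stationary distribution, and uses it to evaluate a long-run average cost. Every parameter is invented for demonstration; this is not the authors' model of K(S), only a generic instance of the truncation idea.

```python
# Hypothetical sketch: truncate an infinite-state, aperiodic birth-death chain
# (drift toward 0, so the stationary mass concentrates on low states) to N
# states, solve for the stationary distribution, and compute a long-run
# average cost. All parameters are illustrative.
import numpy as np

N = 200                       # truncation level (illustrative)
p_up, p_down = 0.3, 0.5       # birth/death probabilities (made up)
P = np.zeros((N, N))
for i in range(N):
    up = p_up if i < N - 1 else 0.0
    down = p_down if i > 0 else 0.0
    P[i, min(i + 1, N - 1)] += up
    P[i, max(i - 1, 0)] += down
    P[i, i] += 1.0 - up - down    # remaining mass stays put (gives aperiodicity)

cost = np.arange(N, dtype=float)  # e.g. a cost proportional to the state index

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

print("long-run average cost:", pi @ cost)
```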