Stochastic Games and Applications 2003
DOI: 10.1007/978-94-010-0189-2_17
Perturbations of Markov Chains with Applications to Stochastic Games

Abstract: In this lecture we will review several topics that are extensively used in the study of n-player stochastic games. These tools were used in the proofs of several results on non-zero-sum stochastic games. Most of the results presented here appeared in Vieille (1997a,b), and some appeared in Solan (1998, 1999). The first main issue is Markov chains where the transition rule is a Puiseux probability distribution. We define the notion of communicating sets and induce a hierarchy on the collection of t…
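The abstract's central object — a Markov chain whose transitions are Puiseux series in a perturbation parameter, decomposed into communicating sets — can be illustrated with a minimal sketch. This is not code from the lecture: the chain below is a made-up 4-state example whose transition entries are polynomials in eps (a simple special case of Puiseux series), and the class computation is a plain transitive-closure routine.

```python
def transition_matrix(eps):
    # Hypothetical 4-state chain, entries polynomial in the perturbation eps.
    # For eps > 0 the two groups {0,1} and {2,3} are linked; at eps = 0 the
    # links vanish and the chain splits into two communicating classes.
    return [
        [0.5 - eps, 0.5, eps, 0.0],
        [0.5, 0.5, 0.0, 0.0],
        [0.0, 0.0, 0.5, 0.5],
        [eps, 0.0, 0.5, 0.5 - eps],
    ]

def communicating_classes(P, tol=1e-12):
    n = len(P)
    # reach[i][j]: state j reachable from i via positive-probability edges
    reach = [[P[i][j] > tol or i == j for j in range(n)] for i in range(n)]
    for k in range(n):                      # Floyd-Warshall transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    classes, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        # a communicating class = states mutually reachable with state i
        cls = sorted(j for j in range(n) if reach[i][j] and reach[j][i])
        seen.update(cls)
        classes.append(cls)
    return classes

print(communicating_classes(transition_matrix(0.01)))  # [[0, 1, 2, 3]]
print(communicating_classes(transition_matrix(0.0)))   # [[0, 1], [2, 3]]
```

The contrast between the two printed decompositions is the phenomenon the hierarchy in the lecture organizes: classes that communicate only through vanishing-probability transitions merge for every positive value of the parameter.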

Cited by 6 publications (5 citation statements). References 15 publications.
“…Notice that for all the solution approaches mentioned in this paragraph and also for the procedure proposed in this work the state-independent game value condition is either imposed directly or implied by the ergodic structure assumption (as in the case of unichain game assumption). As for the perturbation to exploit communication property, the related work is [14]. On the other hand, with respect to the analysis to prove convergence, [16] and [15] are relevant.…”
Section: Results
confidence: 99%
“…Finally, future research topics that appear as an extension of the analysis in this article are proving our conjecture, and solving communicating games when the state-independent value condition is relaxed and incorporating such solution procedures into the algorithms based on the hierarchical decomposition of the state space into communicating classes as in [1] and [14]. Proof: For states in R, partition P(α, β_n) and P(α, β) as …”
Section: Results
confidence: 99%
“…This construction allows one to study the sensitivity of various statistics of the Markov chain as one varies the parameter β in a left neighborhood of 1. For more details, see [15], [18], [16].…”
Section: Remark
confidence: 99%
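The citation above concerns studying how statistics of the chain behave as the parameter β varies in a left neighborhood of 1. As an illustrative sketch (the two-state chain and its parameterization are invented, not taken from [15], [16], or [18]), one can compute the stationary distribution of a β-dependent chain for β close to 1 and watch it select one of the states that become absorbing in the limit:

```python
import numpy as np

def stationary(P):
    # Solve pi P = pi with sum(pi) = 1 as an overdetermined linear system.
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def chain(beta):
    # Hypothetical two-state chain: escape probabilities of different orders
    # in (1 - beta).  At beta = 1 both states are absorbing, but for beta < 1
    # the stationary mass concentrates on the "stickier" state 1.
    eps = 1.0 - beta
    return np.array([[1.0 - eps, eps],
                     [eps ** 2, 1.0 - eps ** 2]])

for beta in (0.9, 0.99, 0.999):
    # stationary distribution is (eps/(1+eps), 1/(1+eps)); mass drifts to state 1
    print(beta, stationary(chain(beta)))
```

The point of the sketch: the limit of the stationary distributions as β → 1⁻ exists and is degenerate on state 1, even though the unperturbed chain at β = 1 gives no way to choose between its two absorbing states. Sensitivity analysis of exactly this kind is what the quoted remark refers to.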
“…This decomposition relies on recurrent classes induced by stationary policies. It has been introduced for Markov Decision Processes (MDP) by Ross and Varadarajan (1991), similar classifications have been used by Bather (1973), Solan (2003), Flesch et al (2008). Building on that classification, we consider a family of auxiliary stochastic games and prove that they have a uniform value independent of the initial state.…”
confidence: 99%
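The decomposition cited above rests on the recurrent classes induced by stationary policies. A hedged sketch of that classification idea (the 3-state MDP and the routines are illustrative, not the construction of Ross and Varadarajan or Solan): fix a stationary deterministic policy, form the Markov chain it induces, and keep only the communicating classes from which there is no probability of escape.

```python
def induced_chain(P, policy):
    # P[s][a] is the transition row for action a in state s;
    # policy[s] picks one action per state (stationary deterministic policy).
    return [P[s][policy[s]] for s in range(len(P))]

def recurrent_classes(P, tol=1e-12):
    n = len(P)
    reach = [[P[i][j] > tol or i == j for j in range(n)] for i in range(n)]
    for k in range(n):                      # transitive closure of reachability
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    classes, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        cls = sorted(j for j in range(n) if reach[i][j] and reach[j][i])
        seen.update(cls)
        # recurrent iff nothing outside the class is reachable from it
        if all(not reach[i][j] or j in cls for j in range(n)):
            classes.append(cls)
    return classes

# Hypothetical MDP: 3 states, 2 actions each.
P = [
    [[1, 0, 0], [0, 1, 0]],   # state 0: a0 stays, a1 moves to state 1
    [[0, 0, 1], [0, 1, 0]],   # state 1: a0 moves to state 2, a1 stays
    [[0, 0, 1], [0, 0, 1]],   # state 2: absorbing under both actions
]

print(recurrent_classes(induced_chain(P, [1, 0, 0])))  # [[2]]
print(recurrent_classes(induced_chain(P, [0, 1, 0])))  # [[0], [1], [2]]
```

Different stationary policies induce different recurrent classes; classifying states by which classes they can belong to, across all policies, is the kind of decomposition the quoted passage builds on.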