2006
DOI: 10.1287/opre.1060.0291

Adaptive Importance Sampling Technique for Markov Chains Using Stochastic Approximation

Abstract: For a discrete-time finite-state Markov chain, we develop an adaptive importance sampling scheme to estimate the expected total cost before hitting a set of terminal states. This scheme updates the change of measure at every transition using constant or decreasing step-size stochastic approximation. The updates are shown to concentrate asymptotically in a neighborhood of the desired zero variance estimator. Through simulation experiments on simple Markovian queues, we observe that the proposed technique perfor…
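As a hedged illustration of the scheme described in the abstract, the sketch below simulates a finite-state chain under a change of measure tilted toward an approximate zero-variance sampler, P*(x, y) ∝ P(x, y)(cost(x, y) + V(y)), and refreshes the value estimates V with a constant step-size stochastic-approximation update after every transition. The function name, the mixing safeguard `mix`, and the importance-weighted TD-style update are illustrative assumptions of mine, not the paper's exact recursion.

```python
import numpy as np

def adaptive_is_total_cost(P, cost, terminal, x0,
                           n_paths=5000, step=0.01, mix=0.1, seed=0):
    """Sketch of adaptive importance sampling for the expected total cost
    accumulated by a finite-state Markov chain before hitting `terminal`.

    P        -- transition matrix (rows sum to 1)
    cost     -- one-step costs, cost[x, y] >= 0
    terminal -- set of absorbing state indices
    x0       -- starting state
    """
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    V = np.ones(n)                      # crude positive initial value guesses
    V[list(terminal)] = 0.0             # no residual cost from terminal states
    estimates = []

    for _ in range(n_paths):
        x, total, weight = x0, 0.0, 1.0
        while x not in terminal:
            # Tilt the original row toward cost(x, .) + V(.), then mix with the
            # original dynamics so the proposal keeps full support (unbiasedness).
            tilt = P[x] * (cost[x] + V)
            tilted = tilt / tilt.sum() if tilt.sum() > 0 else P[x]
            q = (1.0 - mix) * tilted + mix * P[x]
            y = rng.choice(n, p=q)

            ratio = P[x, y] / q[y]      # one-step likelihood ratio
            weight *= ratio
            total += weight * cost[x, y]

            # Constant step-size stochastic-approximation update of V(x),
            # importance-weighted so the expected target is the Bellman target.
            V[x] += step * (ratio * (cost[x, y] + V[y]) - V[x])
            x = y
        estimates.append(total)

    return float(np.mean(estimates)), V
```

Replacing the constant `step` with a decreasing sequence (for example, one over the number of visits to the current state) would correspond to the decreasing step-size variant mentioned in the abstract.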

Cited by 56 publications (70 citation statements). References 31 publications.
“…Thus, the dominant paths have probability Θ(1) and the non-dominant paths have probability o(1). This implies that p_c(y) → 1, and then that p_0 → 1, when ε → 0.…”
Section: Theorem 1, Under Assumption 1, If lim_{ε→0} V(y)/µ(y) = 1 For A… (mentioning)
confidence: 95%
“…, c. We assume that each component is either in a failed state or in an operational state, and that the system evolves as a CTMC whose state is a vector y = (y(1), …”
Section: Markovian Model Of a Highly Reliable System (mentioning)
confidence: 99%
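The excerpt above describes the citing paper's Markovian model of a highly reliable system: c components, each failed or operational, with the system state given by the vector y = (y(1), …, y(c)). As a hedged sketch of that state space (the specific failure/repair rate structure below is my assumption, not the citing paper's construction), one can assemble the CTMC generator over the 2^c component-status vectors:

```python
import itertools
import numpy as np

def hrms_generator(c, fail_rates, repair_rate=1.0):
    """Build an illustrative CTMC generator for a highly reliable system.

    Each of the c components is operational (1) or failed (0); the system
    state is the vector y = (y(1), ..., y(c)).  Failure rates are typically
    small (powers of a rarity parameter epsilon) while repairs occur at O(1)
    rates; both choices here are illustrative assumptions.
    """
    states = list(itertools.product((0, 1), repeat=c))   # all component vectors
    index = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for s in states:
        i = index[s]
        for k in range(c):
            t = list(s)
            if s[k] == 1:                 # operational component k fails
                t[k] = 0
                Q[i, index[tuple(t)]] += fail_rates[k]
            else:                         # failed component k is repaired
                t[k] = 1
                Q[i, index[tuple(t)]] += repair_rate
        Q[i, i] = -Q[i].sum()             # rows of a generator sum to zero
    return states, Q

# Example: 3 components with failure rates that vanish with epsilon
eps = 0.01
states, Q = hrms_generator(3, fail_rates=[eps, eps, eps ** 2])
```

In this regime, with failure rates vanishing in ε and O(1) repair rates, the quoted statement's "dominant" failure paths are the ones whose probability remains Θ(1) as ε → 0, while all other paths have probability o(1).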