1997
DOI: 10.1017/cbo9780511810633
Markov Chains

Abstract: Markov chains are central to the understanding of random processes. This is not only because they pervade the applications of random processes, but also because one can calculate explicitly many quantities of interest. This textbook, aimed at advanced undergraduate or MSc students with some background in basic probability theory, focuses on Markov chains and quickly develops a coherent and rigorous theory whilst showing also how actually to apply it. Both discrete-time and continuous-time chains are studied. A…

Cited by 1,558 publications (1,423 citation statements). References 0 publications.
“…A Markov chain is a sequence of random variables where the value taken by each variable (known as the state of the Markov chain) depends only on the value taken by the previous variable (Norris, 1997). We can generate a sequence of states from a Markov chain by iteratively drawing each variable from its distribution conditioned on the previous variable.…”
Section: Markov Chain Monte Carlo
confidence: 99%
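The iterative sampling described in the statement above can be sketched in plain Python. The two-state transition matrix is a made-up illustration, not taken from the cited text; each next state is drawn from the row of the matrix indexed by the current state.

```python
import random

def sample_chain(P, start, n_steps, rng=random.Random(0)):
    """Generate a state sequence from a row-stochastic transition matrix P.
    Each next state depends only on the current state (the Markov property)."""
    path = [start]
    state = start
    for _ in range(n_steps):
        r = rng.random()
        cum = 0.0
        for nxt, p in enumerate(P[state]):
            cum += p
            if r < cum:
                state = nxt
                break
        path.append(state)
    return path

# Hypothetical two-state chain: 0 = sunny, 1 = rainy.
P = [[0.9, 0.1],
     [0.5, 0.5]]
path = sample_chain(P, start=0, n_steps=20)
```

The sequence `path` is one realization of the chain; averaging functions of many such realizations is the basic idea behind the Markov chain Monte Carlo methods discussed in the citing work.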
“…One is based on a classical, i.e. non-probabilistic, reduction relation on probability distributions on classical terms; a second one is defined via a probabilistic transition relation on probabilistic λ-terms which subsumes the classical β-reduction; and a third one is a linear operator semantics which specifies the computational dynamics in the form of so-called Markov chains [20]. All the three semantics are equivalent in the sense that they assign the same meaning to a probabilistic λ-term P .…”
Section: Results
confidence: 99%
“…A distribution π is said to be a limiting distribution of matrix P if there exists a distribution µ such that µ · Pⁿ → π as n → ∞; if P has exactly one limiting distribution then it is said to have a unique limiting distribution. If matrix P has a unique limiting distribution π then it is also a unique stationary distribution but not vice versa [18]. Fix a set S = {1, 2, .…”
Section: B Stochastic Matrices
confidence: 99%
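The convergence µ · Pⁿ → π in the statement above can be checked numerically by repeated multiplication. The 2×2 matrix below is a hypothetical example (not from the cited paper) of an irreducible, aperiodic chain, for which the limiting distribution exists, is unique, and coincides with the stationary distribution.

```python
def step(mu, P):
    """One multiplication mu · P for a row-stochastic matrix P (plain lists)."""
    n = len(P)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

def limiting_distribution(P, mu, n_iters=200):
    """Iterate mu · P^n; for an irreducible aperiodic chain this converges
    to the unique limiting (and hence stationary) distribution pi."""
    for _ in range(n_iters):
        mu = step(mu, P)
    return mu

# Hypothetical chain whose unique limiting distribution is pi = (2/7, 5/7),
# found by solving pi = pi · P with pi summing to 1.
P = [[0.5, 0.5],
     [0.2, 0.8]]
pi = limiting_distribution(P, [1.0, 0.0])
```

The "not vice versa" direction can be seen with the periodic chain P = [[0, 1], [1, 0]]: it has the unique stationary distribution (1/2, 1/2), yet µ · Pⁿ oscillates for any non-uniform µ, so no limiting distribution exists.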