2017
DOI: 10.1007/s00440-017-0796-7

Random walk on sparse random digraphs

Abstract: A finite ergodic Markov chain exhibits cutoff if its distance to equilibrium remains close to its initial value over a certain number of iterations and then abruptly drops to near 0 on a much shorter time scale. Originally discovered in the context of card shuffling (Aldous-Diaconis, 1986), this remarkable phenomenon is now rigorously established for many reversible chains. Here we consider the non-reversible case of random walks on sparse directed graphs, for which even the equilibrium measure is far from bein…

Cited by 46 publications (77 citation statements). References 41 publications (79 reference statements).
“…One could easily deduce from our proofs that, with high probability, the entropy of the distribution P^t(i, ·) on the time interval [0, t_ent] grows roughly linearly, at rate H. This in turn implies that the entropy of π is (1 − o(1)) log n with high probability. Consequently, we see that the cutoff occurs precisely when the entropy of the chain reaches the entropy of the invariant distribution, and that the mixing time is given by the entropy at stationarity divided by the average single-step entropy H. Interestingly, the same interpretation can be given to the main results in the models studied in [21,7,6,9]. It is thus perhaps tempting to believe that this scenario should apply to a much larger class of Markov chains in random environments, although we do not have a precise conjecture to propose at the present time.…”
Section: 2 (supporting)
confidence: 61%
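The entropic heuristic in the excerpt above can be checked numerically. The following is a minimal sketch on a toy model of our own choosing (every vertex gets a fixed out-degree d, with each out-neighbour sampled uniformly; this is an illustration, not the paper's exact construction): the entropy of P^t(i, ·) grows roughly linearly at rate H = log d, and the predicted mixing time is log(n)/H, the entropy at stationarity divided by the per-step entropy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an assumption for illustration): n vertices, each with
# out-degree d, out-neighbours chosen uniformly at random.
n, d = 2000, 3
out = rng.integers(0, n, size=(n, d))   # out[v] = the d out-neighbours of v

# Dense row-stochastic transition matrix P, entries 1/d per out-edge.
P = np.zeros((n, n))
for v in range(n):
    for w in out[v]:
        P[v, w] += 1.0 / d

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    q = p[p > 0]
    return -np.sum(q * np.log(q))

H = np.log(d)              # per-step entropy: each step picks 1 of d edges
t_ent = np.log(n) / H      # predicted cutoff time: stationary entropy / H

mu = np.zeros(n)
mu[0] = 1.0                # walk started at vertex 0
entropies = []
for t in range(int(2 * t_ent) + 1):
    entropies.append(entropy(mu))   # entropy of P^t(0, .)
    mu = mu @ P

print("entropy at t=5:", entropies[5], "(linear prediction H*t =", 5 * H, ")")
print("predicted mixing time ~ log(n)/H =", t_ent)
```

Early on the walk sees an essentially tree-like neighbourhood, so the entropy at time t is close to (and never exceeds) t·log d; by roughly twice the predicted time the entropy has saturated near log n, the entropy of an approximately uniform stationary law.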
“…The present paper considerably extends the results in [9] by establishing cutoff for a large class of non-reversible sparse stochastic matrices, not necessarily arising as the transition matrix of the random walk on a (directed) graph. The time-irreversibility actually plays a crucial role in our proofs: despite the lack of an explicit underlying structure, the Markov chains that we consider turn out to exhibit a spontaneous "non-backtracking" tendency which allows us to establish a certain i.i.d.…”
Section: 2 (mentioning)
confidence: 55%
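The "spontaneous non-backtracking" tendency mentioned in the excerpt above is easy to observe empirically. In a sparse random digraph the reverse of a traversed edge exists only with probability of order d/n, so the walk almost never returns immediately to its previous vertex; on an undirected d-regular graph, by contrast, each step backtracks with probability exactly 1/d. A hedged sketch on the same toy out-degree-d model as above (our construction, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: n vertices, out-degree d, uniform random out-neighbours.
n, d, steps = 5000, 3, 200_000
out = rng.integers(0, n, size=(n, d))

v_prev, v = None, 0
backtracks = 0
for _ in range(steps):
    v_next = out[v][rng.integers(d)]   # pick a uniform out-edge
    if v_next == v_prev:               # immediate return = backtrack
        backtracks += 1
    v_prev, v = v, v_next

rate = backtracks / steps
print("empirical backtrack rate:", rate)
print("undirected d-regular benchmark 1/d =", 1 / d)
```

The empirical rate is orders of magnitude below 1/d, since a backtrack requires the reciprocal edge to have been sampled, which is the sense in which directedness supplies non-backtracking behaviour "for free".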