“…One could easily deduce from our proofs that, with high probability, the entropy of the distribution P^t(i, ·) on the time interval [0, t_ent] grows roughly linearly, at rate H. This in turn implies that the entropy of π is (1 − o(1)) log n with high probability. Consequently, we see that the cutoff occurs precisely when the entropy of the chain reaches the entropy of the invariant distribution, and that the mixing time is given by the entropy at stationarity divided by the average single-step entropy H. Interestingly, the same interpretation can be given to the main results in the models studied in [21, 7, 6, 9]. It is thus perhaps tempting to believe that this scenario should apply to a much larger class of Markov chains in random environments, although we do not have a precise conjecture to propose at the present time.…”
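The heuristic in the quoted passage can be summarized as a short worked relation (a sketch only, reusing the quotation's own symbols P^t, t_ent, H, π, n; the notation t_mix for the mixing time is ours):

```latex
% Linear entropy growth of the chain started at state i, at rate H:
H\big(P^{t}(i,\cdot)\big) \approx t\,H, \qquad t \in [0,\, t_{\mathrm{ent}}],
% Entropy of the invariant distribution:
\qquad H(\pi) = (1 - o(1))\log n.
% Cutoff occurs when the running entropy reaches the stationary entropy,
% i.e. when t H \approx H(\pi), giving
t_{\mathrm{mix}} \approx \frac{H(\pi)}{H} = \frac{(1 - o(1))\log n}{H}.
```

In words: the mixing time is the entropy at stationarity divided by the average single-step entropy H, which is exactly the interpretation the passage proposes.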