Abstract. Iterated random functions are used to draw pictures or simulate large Ising models, among other applications. They offer a method for studying the steady state distribution of a Markov chain, and give useful bounds on rates of convergence in a variety of examples. The present paper surveys the field and presents some new examples. There is a simple unifying idea: the iterates of random Lipschitz functions converge if the functions are contracting on the average.
Introduction. The applied probability literature is nowadays quite daunting. Even relatively simple topics, like Markov chains, have generated enormous complexity. This paper describes a simple idea that helps to unify many arguments in Markov chains, simulation algorithms, control theory, queuing, and other branches of applied probability. The idea is that Markov chains can be constructed by iterating random functions on the state space S. More specifically, there is a family {f_θ : θ ∈ Θ} of functions that map S into itself, and a probability distribution µ on Θ. If the chain is at x ∈ S, it moves by choosing θ at random from µ, and going to f_θ(x). For now, µ does not depend on x. The process can be written as

X_{n+1} = f_{θ_{n+1}}(X_n), n = 0, 1, 2, . . . ,

where θ_1, θ_2, . . . are independent draws from µ. The Markov property is clear: given the present position of the chain, the conditional distribution of the future does not depend on the past.
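To make the construction concrete, here is a minimal simulation sketch (not from the paper; the affine family and its parameter ranges are assumptions chosen for illustration). It iterates random affine maps f_θ(x) = a·x + b with θ = (a, b) drawn i.i.d. from a distribution µ under which the maps are contracting on the average, and drives two chains with the same θ-sequence from far-apart starting points to show the contraction at work.

```python
import random

def iterate(thetas, x0):
    """Apply the maps f_theta(x) = a*x + b in order, starting from x0.

    This realizes X_{n+1} = f_{theta_{n+1}}(X_n) for the affine family.
    """
    x = x0
    for a, b in thetas:
        x = a * x + b
    return x

random.seed(0)

# One i.i.d. sequence theta_1, ..., theta_200 from an illustrative mu:
# a uniform on [0, 0.9] (each map contracts, so certainly "contracting
# on the average"), b standard normal.
thetas = [(random.uniform(0.0, 0.9), random.gauss(0.0, 1.0))
          for _ in range(200)]

# Two chains driven by the SAME random thetas but different starting
# points: after n steps their gap is (x0_a - x0_b) * a_1 * ... * a_n,
# which is negligible here, so the starting point is forgotten.
x_a = iterate(thetas, x0=-1000.0)
x_b = iterate(thetas, x0=+1000.0)
print(abs(x_a - x_b))  # tiny: the contraction wipes out the initial gap
```

Running fresh θ-sequences instead of reusing one would produce draws whose distribution approximates the steady state of the chain.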