In this paper, we use recent results of Jarner & Roberts (Ann. Appl. Probab., 12, 2002, 224) to establish polynomial convergence rates of Monte Carlo Markov chain algorithms with polynomially tailed target distributions, in particular random-walk Metropolis algorithms, Langevin algorithms and independence samplers. We also use similar methodology to study polynomial convergence of the Gibbs sampler on a constrained state space. The main result for the random-walk Metropolis algorithm is that heavy-tailed proposal distributions lead to faster rates of convergence and thus to qualitatively better algorithms, as measured, for instance, by the existence of central limit theorems for higher-order moments. The paper thus gives the first theoretical justification for the common belief that heavy-tailed proposal distributions improve convergence in the context of random-walk Metropolis algorithms. Similar results are shown to hold for Langevin algorithms and the independence sampler, whereas results for the mixing of Gibbs samplers on uniform distributions over constrained spaces are rather different in character.
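To make the setting concrete, the following is a minimal illustrative sketch, not taken from the paper, of the random-walk Metropolis algorithm on a polynomially tailed Student-t target, run once with a light-tailed Gaussian proposal and once with a heavy-tailed Cauchy proposal. The function name `rw_metropolis`, the degrees of freedom `r = 3`, and the proposal scales are hypothetical choices made here for illustration only.

```python
import numpy as np

def rw_metropolis(log_target, proposal_sampler, x0, n_steps, rng):
    """Random-walk Metropolis: propose y = x + eps with symmetric eps,
    accept with probability min(1, pi(y)/pi(x))."""
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        y = x + proposal_sampler(rng)          # symmetric random-walk proposal
        log_alpha = log_target(y) - log_target(x)
        if np.log(rng.uniform()) < log_alpha:  # Metropolis accept/reject step
            x = y
        samples[i] = x
    return samples

# Polynomially tailed target: Student-t with r degrees of freedom,
# pi(x) proportional to (1 + x^2/r)^(-(r+1)/2).
r = 3
log_target = lambda x: -0.5 * (r + 1) * np.log1p(x**2 / r)

rng = np.random.default_rng(0)
# Light-tailed (Gaussian) versus heavy-tailed (Cauchy) proposal increments.
light = rw_metropolis(log_target, lambda g: g.normal(scale=2.0), 0.0, 50_000, rng)
heavy = rw_metropolis(log_target, lambda g: g.standard_cauchy(), 0.0, 50_000, rng)
```

The paper's results concern the theoretical polynomial ergodicity rates of such chains; the sketch merely illustrates the algorithmic setup in which a heavy-tailed proposal, such as the Cauchy increments above, can make the occasional large jumps needed to traverse the target's polynomial tails.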