We prove explicit, i.e. non-asymptotic, error bounds for Markov chain Monte Carlo methods. The problem is to compute the expectation of a function $f$ with respect to a measure $\pi$. Different convergence properties of the Markov chain imply different error bounds. For uniformly ergodic and reversible Markov chains we prove a lower and an upper error bound with respect to $\|f\|_2$. If there exists an $L_2$-spectral gap, which is a weaker convergence property than uniform ergodicity, then we show an upper error bound with respect to $\|f\|_p$ for $p>2$. A burn-in period is usually an efficient way to tune the algorithm; we provide and justify a recipe for choosing the burn-in. The error bounds are applied to the problem of integration with respect to a possibly unnormalized density. More precisely, we consider integration with respect to log-concave densities and integration over convex bodies. Using a Metropolis algorithm based on a ball walk and the hit-and-run algorithm, we show that both problems are polynomially tractable.
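For orientation, a minimal sketch of the setting with assumed notation (the symbols $S_{n,n_0}$ and $e_\nu$ are chosen here for illustration and need not match the paper's): the expectation is approximated by the sample average of the chain after a burn-in of length $n_0$, and the error is measured in the mean square sense.
\[
  S_{n,n_0}(f) \;=\; \frac{1}{n}\sum_{j=1}^{n} f\bigl(X_{j+n_0}\bigr),
  \qquad
  e_\nu\bigl(S_{n,n_0},f\bigr) \;=\;
  \Bigl( \mathbb{E}_{\nu}\,\bigl| S_{n,n_0}(f) - \textstyle\int f \,\mathrm{d}\pi \bigr|^{2} \Bigr)^{1/2},
\]
where $X_1,X_2,\dots$ is the Markov chain with stationary distribution $\pi$, started from an initial distribution $\nu$. An error bound "with respect to $\|f\|_2$" is then understood as an estimate of the form $e_\nu(S_{n,n_0},f) \le c(n,n_0)\,\|f\|_2$, and analogously for $\|f\|_p$.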