We introduce a convergence diagnostic procedure for MCMC that operates by estimating the total variation distance between the distribution of the sampler after a given number of iterations and its target. The method has advantages over many existing methods in terms of applicability, utility, and interpretability. It can be used to assess convergence of both marginal and joint posterior densities, and we show how it can be applied to the two most commonly used MCMC samplers: the Gibbs sampler and the Metropolis-Hastings algorithm. In some cases the computational burden of this method may be large, but we show how lower-dimensional analogues of the full-dimensional method are available at reduced computational cost. Illustrative examples demonstrate the utility and interpretability of the proposed diagnostic, while also exposing some of its limitations.
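To make the underlying idea concrete, the following minimal sketch (not the paper's estimator, whose details are not given here) runs many independent random-walk Metropolis-Hastings chains targeting a standard normal and approximates the total variation distance between the chain's marginal distribution after n iterations and the known target by comparing binned probabilities. All function names, tuning constants, and the choice of a closed-form target are assumptions made for illustration only.

import numpy as np
from scipy.stats import norm

def mh_chain(n_iter, rng, proposal_sd=1.0):
    """Random-walk Metropolis-Hastings targeting N(0, 1); returns the state after n_iter steps."""
    x = 5.0  # deliberately poor starting value so convergence is visible
    for _ in range(n_iter):
        prop = x + proposal_sd * rng.normal()
        log_accept = 0.5 * (x**2 - prop**2)  # log ratio of N(0, 1) densities
        if np.log(rng.uniform()) < log_accept:
            x = prop
    return x

def tv_estimate(n_iter, n_chains=5000, bins=60, lo=-5.0, hi=5.0, seed=0):
    """Crude TV distance between the iteration-n marginal and N(0, 1), ignoring mass outside [lo, hi]."""
    rng = np.random.default_rng(seed)
    draws = np.array([mh_chain(n_iter, rng) for _ in range(n_chains)])
    edges = np.linspace(lo, hi, bins + 1)
    emp, _ = np.histogram(draws, bins=edges)
    emp = emp / n_chains                      # empirical bin probabilities across chains
    target = np.diff(norm.cdf(edges))         # target bin probabilities from the N(0, 1) CDF
    return 0.5 * np.abs(emp - target).sum()   # TV distance over the discretization

if __name__ == "__main__":
    for n in (1, 5, 20, 100):
        print(f"iterations={n:4d}  estimated TV distance={tv_estimate(n):.3f}")

The estimated distance shrinks as n grows, which is the quantity a diagnostic of this kind monitors; the paper's method presumably replaces the known target used here with an estimate, since the target is unavailable in practice.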