Abstract-The Vũ-Condat algorithm is a standard method for finding a saddle point of a Lagrangian involving a differentiable function. Recent works have sought to adapt the idea of random coordinate descent to this algorithm, with the aim of efficiently solving certain regularized or distributed optimization problems. A drawback of these approaches is that the admissible step sizes can be small, leading to slow convergence. In this paper, we introduce a coordinate-descent primal-dual algorithm that is provably convergent for a wider range of step sizes than previous methods. In particular, the condition on the step sizes depends on the coordinate-wise Lipschitz constants of the differentiable function's gradient. We discuss the application of our method to distributed optimization and to large-scale support vector machine problems.
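For orientation, we recall a standard form of the (full, non-coordinate) Vũ-Condat iteration; the notation below ($f$, $g$, $h$, $L$, $\tau$, $\sigma$) is generic and this is only a sketch of the classical scheme, not of the coordinate-wise algorithm introduced in this paper. For the problem $\min_x f(x) + g(x) + h(Lx)$, with $f$ differentiable and $\nabla f$ $\beta$-Lipschitz, the iteration reads
\begin{align*}
x^{k+1} &= \operatorname{prox}_{\tau g}\bigl(x^{k} - \tau \nabla f(x^{k}) - \tau L^{\top} y^{k}\bigr),\\
y^{k+1} &= \operatorname{prox}_{\sigma h^{*}}\bigl(y^{k} + \sigma L (2x^{k+1} - x^{k})\bigr),
\end{align*}
where convergence is classically guaranteed under a global step-size condition of the form $\tfrac{1}{\tau} - \sigma \|L\|^{2} \ge \tfrac{\beta}{2}$. The coordinate-wise condition announced above replaces the global constant $\beta$ with per-coordinate Lipschitz constants of $\nabla f$, which is what permits larger step sizes.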