In this paper, we study the convergence rate of the DCA (Difference-of-Convex Algorithm), also known as the convex-concave procedure. The DCA is a popular algorithm for difference-of-convex (DC) problems and is known to converge to a stationary point under some assumptions. We derive a worst-case convergence rate of $O(1/\sqrt{N})$ for the objective gradient norm after $N$ iterations for certain classes of unconstrained DC problems. For constrained DC problems with convex feasible sets, we obtain an $O(1/N)$ convergence rate (in a well-defined sense). We give an example showing that the order of convergence cannot be improved for a certain class of DC functions. In addition, we obtain the same convergence rate for the DCA with regularization. Our results complement recent convergence rate results from the literature where it is assumed that the objective function satisfies the Łojasiewicz gradient inequality at stationary points; in particular, we do not make this assumption.
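To fix notation, a minimal sketch of the standard DC setting and the DCA iteration is given below (the symbols $f_1$, $f_2$, $y_k$ are illustrative; the precise assumptions are those stated in the body of the paper):
\[
  \min_{x \in \mathbb{R}^n} \; f(x) = f_1(x) - f_2(x), \qquad f_1,\, f_2 \ \text{convex},
\]
with the DCA linearizing the concave part $-f_2$ at the current iterate and solving the resulting convex subproblem,
\[
  y_k \in \partial f_2(x_k), \qquad
  x_{k+1} \in \operatorname*{argmin}_{x \in \mathbb{R}^n} \; f_1(x) - \langle y_k, x \rangle .
\]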