We consider the problem of distributed online convex optimization, where a group of agents collaborate to track the trajectory of the global minimizers of sums of time-varying objective functions in an online manner. For general convex functions, the theoretical upper bounds of existing methods are given in terms of regularity measures associated with the dynamical system as well as the time horizon. It is thus of interest to determine whether the explicit time-horizon dependence can be removed, as in the case of centralized optimization. In this work, we propose a novel distributed online gradient descent algorithm and show that its dynamic regret bound has no explicit dependence on the time horizon. Instead, it depends on a new regularity measure that quantifies the total change in the gradients evaluated at the optimal points over time. The main driving force of our algorithm is an online adaptation of the gradient tracking technique used in static optimization. Since, in many applications, time-varying objective functions and the corresponding optimal points follow a non-adversarial dynamical system, we also consider the role of prediction, assuming that the optimal points evolve according to a linear dynamical system. We present numerical experiments showing that our proposed algorithm outperforms existing distributed mirror descent-based state-of-the-art methods in terms of optimizer tracking performance. We also present an empirical example suggesting that the analysis of our algorithm is tight in the sense that the regularity measures in the theoretical bounds cannot be removed.
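To make the online gradient tracking idea concrete, the following is a minimal sketch, under illustrative assumptions not taken from the paper (scalar quadratic local losses with slowly drifting targets, a fixed doubly stochastic mixing matrix over four agents, and hand-picked step size). Each agent mixes its state and its gradient tracker with its neighbors, descends along the tracked gradient, and corrects the tracker with the difference of consecutive local gradients; this is the standard online adaptation of gradient tracking, not the paper's exact scheme.

```python
import numpy as np

# Illustrative sketch of distributed online gradient descent with gradient
# tracking. All names and parameters below are assumptions for the example.
n = 4        # number of agents
T = 200      # number of online rounds
alpha = 0.2  # step size (hand-picked for this toy problem)

# Doubly stochastic mixing matrix for a 4-agent ring network.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

def grad(i, t, x):
    # Local time-varying loss f_i^t(x) = 0.5 * (x - b_i(t))^2
    # with a slowly moving target b_i(t).
    b = np.sin(0.05 * t) + 0.1 * i
    return x - b

x = np.zeros(n)  # agent states x_i^t
y = np.array([grad(i, 0, x[i]) for i in range(n)])  # gradient trackers y_i^t

errors = []
for t in range(T):
    # Consensus step plus descent along the tracked global gradient.
    x_new = W @ x - alpha * y
    # Tracker update: mix, then add the change in local gradients.
    y = W @ y + np.array([grad(i, t + 1, x_new[i]) - grad(i, t, x[i])
                          for i in range(n)])
    x = x_new
    # Global minimizer of sum_i f_i^t is the mean of the targets b_i(t).
    opt = np.mean([np.sin(0.05 * (t + 1)) + 0.1 * i for i in range(n)])
    errors.append(np.abs(x - opt).max())

print(round(errors[-1], 3))
```

Because the gradient difference term makes each y_i track the network-average gradient, every agent descends along an estimate of the global (not merely local) gradient; the residual tracking error here comes from the drift of the targets rather than from the heterogeneity of the local losses.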