This paper investigates an adaptive gradient‐based online convex optimization problem over decentralized networks. The nodes of a network aim to track the minimizer of a global time‐varying convex function, and the communication pattern among nodes is modeled as a connected undirected graph. To tackle such optimization problems in a collaborative and distributed manner, a weight decay distributed adaptive online gradient algorithm, called WDDAOG, is first proposed, which combines distributed optimization methods with adaptive strategies. Our theoretical analysis then clearly illustrates the difference between weight decay and L2 regularization for distributed adaptive gradient algorithms. The dynamic regret bound of the proposed algorithm is further analyzed. It is shown that the dynamic regret bound for convex functions grows with order of
$\mathcal{O}\left(\sqrt{n}\left(1+\log T\right)+\sqrt{nT}\right)$, where $T$ and $n$ represent the time horizon and the number of nodes associated with the network, respectively. Numerical experiments demonstrate that WDDAOG works well in practice and compares favorably with existing distributed online optimization schemes.
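The distinction drawn above between weight decay and L2 regularization can be sketched for a generic Adam-style adaptive step (a minimal illustration, not the exact WDDAOG update; all names and hyperparameter values here are illustrative assumptions):

```python
import numpy as np

def adaptive_step(x, grad, m, v, lr=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, wd=0.01, decoupled=True):
    """One Adam-style step; `decoupled` toggles weight decay vs. L2.

    With L2 regularization, the penalty term wd*x is folded into the
    gradient and is therefore rescaled by the adaptive preconditioner
    1/(sqrt(v)+eps); with decoupled weight decay, the parameters are
    shrunk directly by lr*wd, independently of the preconditioner.
    """
    if not decoupled:                  # L2: penalty enters the gradient
        grad = grad + wd * x
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    x_new = x - lr * m / (np.sqrt(v) + eps)   # preconditioned step
    if decoupled:                      # weight decay: shrink directly
        x_new = x_new - lr * wd * x
    return x_new, m, v
```

For non-adaptive SGD the two variants coincide, but under an adaptive preconditioner they produce different iterates, which is the gap the theoretical analysis quantifies.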