This paper investigates an adaptive gradient-based online convex optimization problem over decentralized networks. The nodes of a network aim to track the minimizer of a global time-varying convex function, and the communication pattern among nodes is captured by a connected undirected graph. To tackle such optimization problems in a collaborative and distributed manner, a weight-decay distributed adaptive online gradient algorithm, called WDDAOG, is first proposed, which combines distributed optimization methods with adaptive strategies. Our theoretical analysis then clearly illustrates the difference between weight decay and L2 regularization for distributed adaptive gradient algorithms. The dynamic regret bound of the proposed algorithm is further analyzed. It is shown that the dynamic regret bound for convex functions grows with order of
O(√(n(1 + log T) + nT)), where T and n denote the time horizon and the number of nodes in the network, respectively. Numerical experiments demonstrate that WDDAOG works well in practice and compares favorably with existing distributed online optimization schemes.
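The distinction the abstract draws between weight decay and L2 regularization can be made concrete with a minimal sketch of one round of a decentralized adaptive gradient update. This is an illustrative Adam-style recursion with decoupled (AdamW-style) weight decay and a doubly stochastic mixing matrix, not the paper's exact WDDAOG update; all names and constants are assumptions.

```python
import numpy as np

def adaptive_consensus_step(X, grads, M, V, W, lr=0.1,
                            beta1=0.9, beta2=0.999, wd=1e-2, eps=1e-8):
    """One round of a decentralized adaptive gradient update with
    decoupled weight decay (an illustrative sketch, not the exact
    WDDAOG recursion).

    X:     (n, d) array; row i is node i's current iterate.
    grads: (n, d) array of local gradients at time t.
    M, V:  (n, d) first- and second-moment estimates per node.
    W:     (n, n) doubly stochastic mixing matrix of the graph.
    """
    X_mix = W @ X                              # consensus with neighbours
    M = beta1 * M + (1 - beta1) * grads        # first moment (Adam-style)
    V = beta2 * V + (1 - beta2) * grads ** 2   # second moment
    # Decoupled weight decay shrinks the iterate directly; adding wd * X
    # into the gradient instead would be L2 regularization, which the
    # adaptive scaling by sqrt(V) would then distort differently.
    X_new = (1 - lr * wd) * X_mix - lr * M / (np.sqrt(V) + eps)
    return X_new, M, V
```

Because the decay term multiplies the iterate outside the adaptively scaled gradient step, every coordinate is shrunk at the same rate, which is exactly where the two regularization styles diverge under adaptive preconditioning.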
This paper addresses a network of computing nodes aiming to solve an online convex optimisation problem in a distributed manner, that is, by means of local estimation and communication, without any central coordinator. An online distributed algorithm based on the conditional gradient method is developed, which effectively tackles the high time complexity of distributed online optimisation. The proposed algorithm allows the global objective function to be decomposed into the sum of the local objective functions, and nodes collectively minimise the sum of local time-varying objective functions while the communication pattern among nodes is captured by a connected undirected graph. By adding a regularisation term to the local objective function of each node, the proposed algorithm constructs a new time-varying objective function. The proposed algorithm also utilises a local linear optimisation oracle in place of the projection operation, so that the regret bound of the algorithm can be effectively improved. By introducing the nominal regret and the global regret, the convergence properties of the proposed algorithm are also theoretically analysed. It is shown that, if the objective function of each agent is strongly convex and smooth, these two types of regrets grow sublinearly with the order of O(log T), where T is the time horizon. Numerical experiments also demonstrate the advantages of the proposed algorithm over existing distributed optimisation algorithms.
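The projection-free step described above can be sketched as a Frank-Wolfe update in which a linear optimisation oracle replaces the projection. In this sketch the feasible set is taken to be an l1 ball and the step-size schedule is the standard 2/(t+2); both are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def linear_oracle_l1(g, radius=1.0):
    """Linear optimisation oracle over an l1 ball:
    argmin over ||s||_1 <= radius of <g, s>, attained at a signed
    vertex of the ball (a single scaled coordinate direction)."""
    k = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[k] = -radius * np.sign(g[k])
    return s

def conditional_gradient_step(x, g, t, radius=1.0):
    """One projection-free (conditional gradient) step: the oracle
    returns a vertex of the feasible set, and the iterate moves as a
    convex combination toward it, so feasibility is preserved without
    ever computing a projection."""
    s = linear_oracle_l1(g, radius)
    gamma = 2.0 / (t + 2)                  # standard Frank-Wolfe step size
    return (1 - gamma) * x + gamma * s
```

The appeal for online settings is that the oracle call is a single linear minimisation (here just an argmax over coordinates), which is far cheaper than a projection onto a general constraint set.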
This paper investigates a distributed online optimization problem with convex objective functions over time-varying directed networks, where each agent holds its own convex cost function and the goal is to cooperatively minimize the sum of the local cost functions. To tackle such optimization problems, an accelerated distributed online gradient push-sum algorithm is first proposed, which combines the momentum acceleration technique with the push-sum strategy. We then analyze the regret of the proposed algorithm. The theoretical result shows that the individual regret of the proposed algorithm achieves a sublinear bound with order of
O(√T), where T is the time horizon. Moreover, we implement the proposed algorithm in sensor networks to solve the distributed online estimation problem, and the results illustrate its effectiveness.
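The combination of push-sum mixing and momentum acceleration described above can be illustrated with a minimal sketch of one round. The column-stochastic matrix, the heavy-ball form of the momentum, and all constants are assumptions for illustration, not the paper's exact recursion.

```python
import numpy as np

def momentum_push_sum_step(X, y, U, A, grads, lr=0.05, beta=0.9):
    """One round of a gradient push-sum update with heavy-ball momentum
    (an illustrative sketch; the paper's exact recursion may differ).

    X: (n, d) iterates; y: (n,) push-sum weights (initialised to ones).
    U: (n, d) momentum buffers.
    A: (n, n) column-stochastic matrix of the directed graph at time t;
       A[j, i] is the share of node i's mass pushed to node j.
    """
    X_mix = A @ X                      # push values along directed edges
    y_mix = A @ y                      # push the de-biasing weights too
    # The ratio corrects the bias a directed (non-doubly-stochastic)
    # graph would otherwise introduce into the averages.
    Z = X_mix / y_mix[:, None]
    U = beta * U + grads               # heavy-ball momentum accumulation
    X_new = X_mix - lr * U             # gradient step on the pushed mass
    return X_new, y_mix, U, Z
```

With zero gradients this reduces to plain push-sum consensus: every node's de-biased estimate Z converges to the network-wide average of the initial iterates, which is the property the gradient and momentum terms then perturb toward the global minimizer.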