This paper focuses on an online version of the emerging distributed constrained aggregative optimization framework, which is particularly suited for applications arising in cooperative robotics. Agents in a network aim to minimize the sum of local cost functions, each depending on a local optimization variable, subject to a local constraint, and on an aggregated version of all the variables (e.g., their mean). We focus on a challenging online scenario in which the costs, the aggregation functions, and the constraints can all change over time, thus enlarging the class of captured applications. Inspired by an existing scheme, we propose a distributed algorithm with a constant step size, named Projected Aggregative Tracking, to solve the online optimization problem. We prove that the dynamic regret is bounded by a constant term plus a linear term related to the temporal variations of the problem. Moreover, in the static case (i.e., with constant costs and constraints), the solution estimates are proved to converge at a linear rate to the optimal solution. Finally, numerical examples show the efficacy of the proposed approach on a multi-robot basketball game and a robotic surveillance scenario.
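The abstract does not spell out the update rule; the Python sketch below illustrates an aggregative-tracking iteration with a projected local step in the spirit of the described scheme. Everything in it is an illustrative assumption: the quadratic costs f_i(x_i, sigma) = ||x_i - r_i||^2 + ||sigma - b_i||^2, the identity aggregation functions (so the aggregate is the mean of the x_i), the box constraints, the complete communication graph with uniform weights, and the step sizes alpha and delta.

```python
import numpy as np

# Illustrative setup (all data assumed): quadratic costs
# f_i(x_i, sigma) = ||x_i - r_i||^2 + ||sigma - b_i||^2, identity aggregation
# phi_i(x_i) = x_i, box constraints X_i = [lo, hi]^d, complete graph weights.
N, d, T = 10, 2, 500
alpha, delta = 0.1, 0.5                      # assumed step sizes
rng = np.random.default_rng(0)
r = rng.normal(size=(N, d))                  # local targets
b = rng.normal(size=(N, d))                  # aggregate targets
lo, hi = -2.0, 2.0
A = np.full((N, N), 1.0 / N)                 # doubly stochastic consensus weights

x = rng.uniform(lo, hi, size=(N, d))         # local decision variables (one row per agent)
s = x.copy()                                 # tracker of the aggregate sigma(x)
y = 2 * (s - b)                              # tracker of the average gradient w.r.t. the aggregate

def proj(z):
    return np.clip(z, lo, hi)                # Euclidean projection onto the box X_i

for t in range(T):
    grad2_old = 2 * (s - b)                  # gradient w.r.t. the aggregate, before the update
    descent = 2 * (x - r) + y                # local gradient plus tracked aggregate gradient
    x_new = x + delta * (proj(x - alpha * descent) - x)   # projected local step
    s = A @ s + x_new - x                    # consensus + innovation: tracks sigma(x)
    y = A @ y + 2 * (s - b) - grad2_old      # consensus + innovation: tracks the gradient average
    x = x_new
```

Each agent keeps two trackers alongside its decision variable: s_i estimates the aggregate, and y_i estimates the average of the gradients with respect to the aggregate; the projected step keeps every iterate feasible for the local constraint.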
In this letter we address nonconvex distributed consensus optimization, a popular framework for distributed big-data analytics and learning. We consider the Gradient Tracking algorithm and, by resorting to an elegant system-theoretic analysis, we show that the agents' estimates asymptotically reach consensus on a stationary point. We exploit a suitable change of coordinates to write the Gradient Tracking dynamics as the interconnection of a fast subsystem and a slow one. Following a singular perturbation approach, we separately study two auxiliary subsystems, called the boundary-layer system and the reduced system, respectively. We provide a Lyapunov function for the boundary-layer system and use LaSalle-based arguments to show that trajectories of the reduced system converge to the set of stationary points. Finally, a customized version of LaSalle's Invariance Principle for singularly perturbed systems is proved to establish the convergence properties of the Gradient Tracking algorithm.
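For reference, the Gradient Tracking iteration analyzed in the letter takes the standard form sketched below in Python; the nonconvex local costs, the uniform weight matrix, and the constant step size are illustrative assumptions.

```python
import numpy as np

# Illustrative setup (all data assumed): smooth nonconvex local costs
# f_i(x) = ||x - c_i||^2 + cos(sum(x)), complete graph weights, constant step size.
N, d, T = 6, 3, 2000
alpha = 0.01
rng = np.random.default_rng(1)
c = rng.normal(size=(N, d))

def grad_f(i, x):
    return 2 * (x - c[i]) - np.sin(x.sum()) * np.ones(d)

A = np.full((N, N), 1.0 / N)                 # doubly stochastic consensus weights

x = rng.normal(size=(N, d))                  # one estimate per agent (rows)
g = np.array([grad_f(i, x[i]) for i in range(N)])   # tracker of the average gradient

for t in range(T):
    x_new = A @ x - alpha * g                # consensus step plus descent along the tracker
    g = (A @ g
         + np.array([grad_f(i, x_new[i]) for i in range(N)])
         - np.array([grad_f(i, x[i]) for i in range(N)]))   # dynamic average consensus update
    x = x_new
```

In the coordinates used in the letter, the consensus error plays the role of the fast (boundary-layer) dynamics, while the evolution of the mean estimate driven by the tracked average gradient constitutes the slow (reduced) dynamics.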
This paper deals with a network of computing agents aiming to solve an online optimization problem in a distributed fashion, i.e., by means of local computation and communication, without any central coordinator. We propose the gradient tracking with adaptive momentum estimation (GTAdam) distributed algorithm, which combines a gradient tracking mechanism with first- and second-order momentum estimates of the gradient. The algorithm is analyzed in the online setting for strongly convex cost functions with Lipschitz continuous gradients. We provide an upper bound on the dynamic regret, given by a term related to the initial conditions and another term related to the temporal variations of the objective functions. Moreover, a linear convergence rate is guaranteed in the static setup. The algorithm is tested on a time-varying classification problem, on a (moving) target localization problem, and on a stochastic optimization setup from image classification. In these numerical experiments from multi-agent learning, GTAdam outperforms state-of-the-art distributed optimization methods.
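The abstract does not report the update equations; the Python sketch below illustrates a GTAdam-style iteration under stated assumptions: the moment recursions mimic Adam, the problem data, weights, and hyperparameters are invented for illustration, and safeguards such as the clipping of the second-moment estimate used in the actual algorithm are omitted.

```python
import numpy as np

# Illustrative setup (all data assumed): strongly convex quadratic local costs,
# complete graph weights, Adam-style hyperparameters; the safeguarding of the
# second-moment estimate used by the actual algorithm is omitted here.
N, d, T = 8, 5, 1000
alpha, beta1, beta2, eps = 0.05, 0.9, 0.999, 1e-8
rng = np.random.default_rng(2)
targets = rng.normal(size=(N, d))

def grad_f(i, x):
    return 2 * (x - targets[i])              # gradient of f_i(x) = ||x - target_i||^2

A = np.full((N, N), 1.0 / N)                 # doubly stochastic consensus weights

x = rng.normal(size=(N, d))
g = np.array([grad_f(i, x[i]) for i in range(N)])   # gradient tracker
m = np.zeros((N, d))                                # first-moment estimate
v = np.zeros((N, d))                                # second-moment estimate

for t in range(T):
    m = beta1 * m + (1 - beta1) * g                 # momentum on the tracked gradient
    v = beta2 * v + (1 - beta2) * g**2              # second moment of the tracked gradient
    x_new = A @ x - alpha * m / (np.sqrt(v) + eps)  # consensus step with Adam-scaled descent
    g = (A @ g
         + np.array([grad_f(i, x_new[i]) for i in range(N)])
         - np.array([grad_f(i, x[i]) for i in range(N)]))   # gradient tracking update
    x = x_new
```

The key difference from plain gradient tracking is that the consensus step descends along the first-moment estimate m_i scaled by the square root of the second-moment estimate v_i, rather than along the raw tracker g_i.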