Technological advances in ad-hoc networking and the availability of low-cost, reliable computing, data storage, and sensing devices have made possible scenarios where the coordination of many subsystems extends the range of human capabilities. Smart grid operations, smart transportation, smart healthcare, and sensing networks for environmental monitoring and exploration in hazardous situations are just a few examples of such network operations. In these applications, the ability of a network system to fuse information, compute common estimates of unknown quantities, and agree on a common view of the world, all in a decentralized fashion, is critical. These problems can be formulated as agreement problems on linear combinations of dynamically changing reference signals or local parameters. This dynamic agreement problem corresponds to dynamic average consensus, the problem of interest in this article. The dynamic average consensus problem is for a group of agents to cooperate in order to track the average of locally available time-varying reference signals, where each agent is only capable of local computations and communication with local neighbors; see Figure 1.

Figure 1: A group of communicating agents, each endowed with a time-varying reference signal.

arXiv:1803.04628v2 [cs.SY] 24 Nov 2018

Centralized solutions have drawbacks

The difficulty in the dynamic average consensus problem is that the information is distributed across the network. A straightforward solution, termed centralized, appears to be to gather all of the information in a single place, do the computation (in other words, calculate the average), and then send the solution back through the network to each agent.
Although simple, the centralized approach has numerous drawbacks: (1) the algorithm is not robust to failures of the centralized agent (if the centralized agent fails, the entire computation fails); (2) the method is not scalable, since the amount of communication and memory required on each agent grows with the size of the network; (3) each agent must have a unique identifier (so that the centralized agent counts each value only once); (4) the calculated average is delayed by an amount that grows with the size of the network; and (5) the reference signals from each agent are exposed over the entire network, which is unacceptable in applications involving sensitive data. The centralized solution is fragile because the network has a single point of failure. This can be overcome by having every agent act as the centralized agent. In this approach, referred to as flooding, agents transmit the values of the reference signals across the entire network until each agent knows every reference signal. This may be summarized as "first do all communications, then do all computations". While flooding fixes the issue of robustness to agent failures, it is still subject to many of the drawbacks of the centralized solution. Also, although this approach works reasonably well for small networks, ...
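In contrast to the centralized and flooding approaches, a decentralized scheme lets each agent maintain only a local estimate that it mixes with its neighbors'. The following is a minimal sketch of one standard first-order dynamic average consensus iteration (the specific update rule, ring topology, and signal choices here are illustrative assumptions, not taken from the text): each agent mixes its estimate through a doubly stochastic weight matrix W and adds the local change in its own reference signal, x(k+1) = W x(k) + r(k+1) − r(k), with x(0) = r(0).

```python
import numpy as np

def dynamic_average_consensus(W, r, steps):
    """Track the network-wide average of time-varying references r(k).

    W     : (n, n) doubly stochastic weight matrix (local mixing only)
    r     : function k -> length-n array of local reference values
    steps : number of iterations
    """
    x = r(0).copy()                      # initialize with the references
    for k in range(steps):
        # Mix with neighbors, then correct for the drift in the local signal.
        x = W @ x + r(k + 1) - r(k)
    return x

# Illustrative 3-agent network: each agent averages with its neighbors.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

# Slowly varying local signals; the quantity to track is their mean.
r = lambda k: np.array([np.sin(0.01 * k), 1.0, 0.01 * k])

x = dynamic_average_consensus(W, r, 500)
err = np.max(np.abs(x - np.mean(r(500))))
print(err)  # small tracking error for slowly varying signals
```

Because W is doubly stochastic, the network-wide sum of the estimates equals the sum of the references at every step, so the estimates hover around the true average with an error proportional to how fast the signals vary.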
This work proposes an accelerated first-order algorithm we call the Robust Momentum Method for optimizing smooth strongly convex functions. The algorithm has a single scalar parameter that can be tuned to trade off robustness to gradient noise against worst-case convergence rate. At one extreme, the algorithm is faster than Nesterov's Fast Gradient Method by a constant factor but more fragile to noise. At the other extreme, the algorithm reduces to the Gradient Method and is very robust to noise. The algorithm design technique is inspired by methods from classical control theory, and the resulting algorithm has a simple analytical form. Algorithm performance is verified in a series of numerical simulations covering both the noise-free and relative-gradient-noise cases.

Notation. The set of functions that are m-strongly convex and L-smooth is denoted F(m, L). In particular, f ∈ F(m, L) if for all x, y ∈ ℝⁿ,

    (m/2)‖y − x‖² ≤ f(y) − f(x) − ∇f(x)ᵀ(y − x) ≤ (L/2)‖y − x‖².

The condition ratio is defined as κ := L/m. A numerical study in [3] revealed that the standard rate bound for FGM derived in [2] is conservative. Nevertheless, the bound has a simple algebraic form and is asymptotically tight.
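The two baselines named above, the Gradient Method and Nesterov's Fast Gradient Method, can be sketched on a simple quadratic to make the rate gap concrete. This is a minimal illustration of the baselines only, with standard textbook tunings (step size 1/L, momentum (√κ − 1)/(√κ + 1)); it does not reproduce the Robust Momentum Method's own parameter formulas.

```python
import numpy as np

def gradient_method(grad, x0, L, steps):
    """Plain gradient descent with the standard 1/L step size."""
    x = x0.copy()
    for _ in range(steps):
        x = x - (1.0 / L) * grad(x)
    return x

def fast_gradient_method(grad, x0, m, L, steps):
    """Nesterov's FGM with the constant-momentum tuning for F(m, L)."""
    kappa = L / m                                 # condition ratio
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(steps):
        y = x + beta * (x - x_prev)               # momentum extrapolation
        x_prev, x = x, y - (1.0 / L) * grad(y)
    return x

# f(x) = 0.5 x^T diag(d) x with m = 1, L = 100; the minimizer is 0.
d = np.array([1.0, 100.0])
grad = lambda x: d * x
x0 = np.array([1.0, 1.0])

gm_err = np.linalg.norm(gradient_method(grad, x0, 100.0, 200))
fgm_err = np.linalg.norm(fast_gradient_method(grad, x0, 1.0, 100.0, 200))
print(gm_err, fgm_err)  # FGM's error is many orders of magnitude smaller
```

On this κ = 100 quadratic the Gradient Method contracts like 1 − 1/κ per step while FGM contracts like 1 − 1/√κ, which is the constant-factor gap the Robust Momentum Method's tuning knob interpolates across.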
This work concerns the analysis and design of distributed first-order optimization algorithms over time-varying graphs. The goal of such algorithms is to optimize a global function that is the average of local functions, using only local computations and communications. Several different algorithms have been proposed that achieve linear convergence to the global optimum when the local functions are strongly convex. We provide a unified analysis that yields a worst-case linear convergence rate as a function of the condition number of the local functions, the spectral gap of the graph, and the parameters of the algorithm. The framework requires solving a small semidefinite program whose size is fixed; it does not depend on the number of local functions or the dimension of the domain. The result is a computationally efficient method for distributed algorithm analysis that enables the rapid comparison, selection, and tuning of algorithms. Finally, we propose a new algorithm, which we call SVL, that is easily implementable and achieves the fastest possible worst-case convergence rate among all algorithms in the family we considered. We support our theoretical analysis with numerical experiments that generate worst-case examples demonstrating the effectiveness of SVL.
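To make the problem setup concrete, here is a minimal sketch of one well-known member of this algorithm family, the gradient-tracking (DIGing-style) iteration; this is an illustrative stand-in and is not the SVL algorithm itself, and the problem data below are invented for the example. Each agent keeps an estimate x_i of the global minimizer and a tracker y_i of the average gradient, both mixed through the graph via a doubly stochastic matrix W.

```python
import numpy as np

def gradient_tracking(W, grads, x0, alpha, steps):
    """Minimize (1/n) sum_i f_i(x) using only local mixing via W.

    grads : list of local gradient functions, one per agent
    x0    : (n, d) initial estimates, one row per agent
    """
    n = len(grads)
    x = x0.copy()
    g = np.array([grads[i](x[i]) for i in range(n)])  # local gradients
    y = g.copy()                                      # gradient tracker
    for _ in range(steps):
        x_new = W @ x - alpha * y                     # consensus + descent
        g_new = np.array([grads[i](x_new[i]) for i in range(n)])
        y = W @ y + g_new - g                         # track the avg gradient
        x, g = x_new, g_new
    return x

# Three agents with scalar quadratics f_i(x) = 0.5 * a_i * (x - b_i)^2;
# the global minimizer is the a-weighted mean of the b_i.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 2.0])
grads = [lambda x, i=i: a[i] * (x - b[i]) for i in range(3)]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x0 = np.zeros((3, 1))

x = gradient_tracking(W, grads, x0, alpha=0.1, steps=300)
x_star = np.sum(a * b) / np.sum(a)
print(np.max(np.abs(x - x_star)))  # all agents near the global minimizer
```

The gradient-tracking correction y is what restores linear convergence to the exact optimum: with simple mixing alone (no tracker), agents would converge only to a neighborhood of x_star whose size depends on the step size.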