Recently, there has been growing interest in solving consensus optimization problems over multiagent networks. In this paper, we develop a decentralized algorithm for the consensus optimization problem

minimize_{x ∈ R^p}  f̄(x) = (1/n) Σ_{i=1}^{n} f_i(x),

which is defined over a connected network of n agents, where each function f_i is held privately by agent i and encodes the agent's data and objective. All the agents collaboratively seek a minimizer while each agent can communicate only with its neighbors. Such a computation scheme avoids a data fusion center and long-distance communication, and offers better load balance across the network. This paper proposes a novel decentralized exact first-order algorithm (abbreviated as EXTRA) to solve the consensus optimization problem; "exact" means that it converges to an exact solution. EXTRA uses a fixed, large step size, which can be determined independently of the network size or topology. The local variable of every agent i converges uniformly and consensually to an exact minimizer of f̄. In contrast, the well-known decentralized gradient descent (DGD) method must use diminishing step sizes in order to converge to an exact minimizer. EXTRA and DGD admit the same choices of mixing matrices and have similar per-iteration complexity; EXTRA, however, uses the gradients of the last two iterates, whereas DGD uses only that of the last iterate. EXTRA has the best known convergence rates among existing synchronized first-order decentralized algorithms for minimizing convex Lipschitz-differentiable functions. Specifically, if the f_i's are convex and have Lipschitz continuous gradients, EXTRA has an ergodic convergence rate of O(1/k) in terms of the first-order optimality residual. In addition, as long as f̄ is (restricted) strongly convex (not all individual f_i's need to be so), EXTRA converges to an optimal solution at a linear rate O(C^{-k}) for some constant C > 1.

Such problems arise in a range of applications [9,22,26], including estimation [1,2,16,18,29], sparse optimization [19,35], and low-rank matrix completion [20]. The functions f_i can take the form of least squares [7,15,34] or regularized least squares [1,2,9,18,22], as well as more general forms [26]. The solution x can represent, for example, the average temperature of a room [7,34], frequency-domain occupancy of spectra [1,2], states of a smart grid system [10,16], sparse vectors [19,35], a matrix factor [20], and so on. In general, decentralized optimization fits scenarios in which the data are collected and/or stored in a distributed network, a fusion center is either infeasible or uneconomical, and/or computing is required to be performed in a decentralized and collaborative manner by multiple agents.

Related methods. Existing first-order decentralized methods for solving (1.1) include the (sub)gradient method [21,25,36], the (sub)gradient-push method [23,24], the fast (sub)gradient method [5,14], and the dual averaging method [8]. Compared to classical centralized algorithms, decentralized algorithms encounter ...
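To make the difference from DGD concrete, the following is a minimal NumPy sketch of an update that reuses the gradients of the last two iterates with a fixed step size, in the spirit of EXTRA as summarized above. The stacked interface (each row of x0 is one agent's local copy, with grad returning the row-stacked local gradients) and the choice of the second mixing matrix as (I + W)/2 are illustrative assumptions rather than the paper's prescription.

```python
import numpy as np

def extra_sketch(grad, x0, W, alpha, iters=1000):
    """Sketch of an EXTRA-style update (assumed form):
        x^{k+2} = (I + W) x^{k+1} - W_tilde x^k - alpha * (grad(x^{k+1}) - grad(x^k)),
    with W_tilde = (I + W) / 2 and a plain mixing/gradient first step.
    Rows of x0 are the agents' local copies; W is a symmetric, doubly
    stochastic mixing matrix; alpha is a fixed step size."""
    n = W.shape[0]
    I = np.eye(n)
    W_tilde = (I + W) / 2.0
    x_prev = x0
    x_curr = W @ x0 - alpha * grad(x0)   # first step looks like one DGD step
    for _ in range(iters):
        x_next = (I + W) @ x_curr - W_tilde @ x_prev \
                 - alpha * (grad(x_curr) - grad(x_prev))
        x_prev, x_curr = x_curr, x_next
    return x_curr
```

Note that each iteration requires only one new local gradient per agent and one round of neighbor communication, consistent with the per-iteration cost of DGD noted in the abstract.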
In decentralized consensus optimization, the agents of a connected network collaboratively minimize the sum of their local objective functions over a common decision variable, with information exchange restricted to neighboring agents. To this end, one can first reformulate the problem and then apply the alternating direction method of multipliers (ADMM). The resulting method carries out iterative computation at the individual agents and information exchange between neighbors. This approach has been observed to converge quickly and is considered powerful. This paper establishes its linear convergence rate for the decentralized consensus optimization problem with strongly convex local objective functions. The theoretical convergence rate is given explicitly in terms of the network topology, the properties of the local objective functions, and the algorithm parameter. This result is not only a performance guarantee but also a guideline toward accelerating the convergence of the ADMM.
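As a concrete illustration of the reformulation step mentioned above, one common construction introduces an auxiliary variable per edge of the communication graph so that the equality constraints enforce consensus; the sketch below shows this standard form (the paper's exact formulation and notation may differ).

```latex
% Consensus reformulation over a connected graph G = (V, E):
% each agent i keeps a local copy x_i, and an auxiliary variable z_{ij}
% couples the endpoints of every edge (i, j).  On a connected graph the
% constraints force x_1 = ... = x_n, and the ADMM can then be applied
% to the augmented Lagrangian of this problem.
\begin{aligned}
  \min_{\{x_i\},\ \{z_{ij}\}} \quad & \sum_{i=1}^{n} f_i(x_i) \\
  \text{subject to} \quad & x_i = z_{ij}, \quad x_j = z_{ij}, \qquad \forall\, (i,j) \in E .
\end{aligned}
```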
Consider the consensus problem of minimizing f(x) = Σ_{i=1}^{n} f_i(x), where x ∈ R^p and each f_i is known only to agent i in a connected network of n agents. To solve this problem, all the agents collaborate with their neighbors through information exchange. This type of decentralized computation does not need a fusion center, offers better network load balance, and improves data privacy. This paper studies the decentralized gradient descent method [20], in which each agent i updates its local variable x_(i) ∈ R^p by combining the average of its neighbors' variables with a local negative-gradient step −α∇f_i(x_(i)), that is, the iteration x_(i)^{k+1} = Σ_j w_ij x_(j)^k − α∇f_i(x_(i)^k), where the w_ij are the mixing (averaging) weights. The behavior of this iteration depends on the step size, the convexity of the functions, and the network spectrum.
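A minimal NumPy sketch of this iteration in stacked form is given below; stacking the local copies as the rows of a matrix X and the local gradients as the rows of grad(X) is an assumed interface for illustration only.

```python
import numpy as np

def dgd_sketch(grad, x0, W, alpha, iters=1000):
    """Decentralized gradient descent in stacked form:
    row i of X is agent i's local copy x_(i); W is the n-by-n mixing
    (averaging) matrix; grad(X) returns the row-stacked local gradients,
    so one iteration is  X <- W X - alpha * grad(X)."""
    X = x0
    for _ in range(iters):
        # x_(i)^{k+1} = sum_j w_ij x_(j)^k - alpha * grad f_i(x_(i)^k)
        X = W @ X - alpha * grad(X)
    return X
```

As noted in the EXTRA abstract above, with a fixed step size this iteration generally does not reach an exact consensus minimizer; diminishing step sizes are needed for exact convergence.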
This paper proposes a decentralized algorithm for solving a consensus optimization problem defined over a static networked multi-agent system, where the local objective functions have a smooth+nonsmooth composite form. Examples of such problems include decentralized constrained quadratic programming and compressed sensing, as well as many regularization problems arising in inverse problems, signal processing, and machine learning, all of which have decentralized applications. This paper addresses the need for efficient decentralized algorithms that take advantage of proximal operations for the nonsmooth terms. We propose a proximal gradient exact first-order algorithm (PG-EXTRA) that utilizes the composite structure and has the best known convergence rate. It is a nontrivial extension of the recent algorithm EXTRA. At each iteration, each agent locally computes a gradient of the smooth part of its objective and a proximal map of the nonsmooth part, and exchanges information with its neighbors. The algorithm is "exact" in the sense that an exact consensus minimizer can be obtained with a fixed step size, whereas most previous methods must use diminishing step sizes. When the smooth part has Lipschitz continuous gradients, PG-EXTRA has an ergodic convergence rate of O(1/k) in terms of the first-order optimality residual. When the smooth part vanishes, PG-EXTRA reduces to P-EXTRA, an algorithm without the gradient step (hence no "G" in the name), which has a slightly improved convergence rate of o(1/k) in a standard (non-ergodic) sense. Numerical experiments demonstrate the effectiveness of PG-EXTRA and validate our convergence results.
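For readers unfamiliar with the proximal map each agent computes for its nonsmooth term, the sketch below shows the standard case r(x) = λ‖x‖₁ (relevant, e.g., to the compressed sensing example above), whose proximal map is elementwise soft-thresholding; the variable names and the ℓ1 choice are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal map of t * ||x||_1, i.e.
        argmin_x  t * ||x||_1 + 0.5 * ||x - v||^2,
    which is elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Example: the local proximal step an agent would take when its nonsmooth
# term is lam * ||x||_1 and the fixed step size is alpha.
alpha, lam = 0.1, 0.5
v = np.array([0.9, -0.02, 0.3])
x_local = prox_l1(v, alpha * lam)   # -> array([0.85, 0.  , 0.25])
```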
Existing approaches to online convex optimization (OCO) make sequential one-slot-ahead decisions, which lead to (possibly adversarial) losses that drive subsequent decision iterates. Their performance is evaluated by the so-called regret, which measures the difference between the losses of the online solution and those of the best fixed solution in hindsight. The present paper deals with online convex optimization involving adversarial loss functions and adversarial constraints, where the constraints are revealed after decisions are made and may be violated instantaneously, provided they are satisfied in the long term. Performance of an online algorithm in this setting is assessed by (i) the difference between its losses and those of the best dynamic solution with one-slot-ahead information of the loss function and the constraint (termed the dynamic regret), and (ii) the accumulated amount of constraint violation (termed the dynamic fit). In this context, a modified online saddle-point (MOSP) scheme is developed and proved to simultaneously yield sub-linear dynamic regret and fit, provided that the accumulated variations of the per-slot minimizers and constraints grow sub-linearly with time. MOSP is also applied to the dynamic network resource allocation task and compared with the well-known stochastic dual gradient method. Under various scenarios, numerical experiments demonstrate the performance gain of MOSP relative to the state-of-the-art.
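To indicate the type of recursion an online saddle-point scheme runs, the sketch below gives one slot of a generic online primal-dual update on the instantaneous Lagrangian; it is not the paper's MOSP recursion (whose specific modifications are not reproduced here), and all function and parameter names are assumptions for illustration.

```python
import numpy as np

def online_primal_dual_step(x, lam, grad_f, g, jac_g, alpha, mu, project):
    """One slot of a generic online primal-dual (saddle-point) update for a
    loss f_t and constraint g_t(x) <= 0 revealed after the decision:
    primal descent on f_t(x) + lam^T g_t(x), dual ascent on the violation."""
    x_new = project(x - alpha * (grad_f(x) + jac_g(x).T @ lam))
    lam_new = np.maximum(lam + mu * g(x_new), 0.0)   # keep multipliers nonnegative
    return x_new, lam_new
```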