We consider distributed optimization problems in which a number of agents seek the optimum of a global objective function through merely local information sharing. The problem arises in various application domains, such as resource allocation, sensor fusion, and distributed learning. In particular, we are interested in scenarios where agents use uncoordinated (different) constant stepsizes for local optimization. In most existing works, this kind of stepsize rule, which is necessary in asynchronous scenarios, leads to a gap (error) between the estimated result and the exact optimum. To deal with this issue, we develop a new augmented distributed gradient method (termed Aug-DGM) based on consensus theory. The proposed algorithm not only allows for uncoordinated stepsizes but also, most importantly, is able to converge to the exact optimum even with constant stepsizes, assuming that the global objective function has a Lipschitz-continuous gradient. A simple numerical example is provided to illustrate the effectiveness of the algorithm.
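The idea above can be illustrated with a minimal numerical sketch: each agent mixes its decision estimate through a doubly stochastic matrix and maintains a second variable that tracks the average gradient, while using its own constant stepsize. The quadratic costs, mixing weights, stepsizes, and variable names below are illustrative choices of ours, not the paper's.

```python
import numpy as np

# Sketch of an Aug-DGM-style iteration with gradient tracking:
#   x_{k+1} = W @ (x_k - alpha * y_k)          (consensus + local descent)
#   y_{k+1} = W @ (y_k + grad_{k+1} - grad_k)  (average-gradient tracking)
# Each agent i minimizes f_i(x) = 0.5*(x - b_i)^2, so the optimum of the
# sum is mean(b). Stepsizes are deliberately uncoordinated.
n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda x: x - b                      # stacked local gradients

# Doubly stochastic mixing matrix for a 4-agent ring (Metropolis-style weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

alpha = np.array([0.05, 0.08, 0.06, 0.09])  # different constant stepsizes

x = np.zeros(n)
y = grad(x)                                 # tracker starts at local gradients
for _ in range(2000):
    g_old = grad(x)
    x = W @ (x - alpha * y)
    y = W @ (y + grad(x) - g_old)

print(x)  # all agents' estimates approach mean(b) = 2.5
```

Despite the heterogeneous stepsizes, every agent converges to the exact minimizer rather than a neighborhood of it, which is the behavior the abstract highlights.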
In this paper, we focus on solving a distributed convex optimization problem over a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. To this end, we consider new distributed gradient-based methods in which each node maintains two estimates, namely, an estimate of the optimal decision variable and an estimate of the gradient of the average of the agents' objective functions. From the viewpoint of an agent, the information about the gradients is pushed to the neighbors, while the information about the decision variable is pulled from the neighbors, hence the name "push-pull gradient methods". The name also reflects the implementation: the push and pull communication protocols are respectively employed to implement certain steps of the numerical schemes. The methods utilize two different graphs for the information exchange among agents and, as such, unify algorithms with different types of distributed architecture, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architectures. We show that the proposed algorithms and their many variants converge linearly for strongly convex and smooth objective functions over a network (possibly with unidirectional data links) in both synchronous and asynchronous random-gossip settings. We numerically evaluate the proposed algorithms on both static and time-varying graphs and find that they are competitive with other linearly convergent schemes. Parallel, coordinated, and asynchronous algorithms were discussed in [20] and the references therein.
The reader is also referred to the recent paper [15] and the references therein for a comprehensive survey on distributed optimization algorithms. In the first part of this paper, we introduce a novel gradient-based algorithm (Push-Pull) for distributed (consensus-based) optimization in directed graphs. Unlike the push-sum type protocols used in the previous literature [16,36], our algorithm uses a row stochastic matrix for the mixing of the decision variables, while it employs a column stochastic matrix for tracking the average gradients. Although motivated by a fully decentralized scheme, we show that Push-Pull works both in fully decentralized networks and in two-tier networks. Gossip-based communication protocols are popular choices for distributed computation due to their low communication costs [1,10,8,11]. In the second part of this paper, we consider a random-gossip push-pull algorithm (G-Push-Pull) in which, at each iteration, an agent wakes up uniformly at random and communicates with one or two of its neighbors. Both Push-Pull and G-Push-Pull have several variants. We show that they all converge linearly to the optimal solution for strongly convex and smooth objective functions.
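A rough numerical sketch of the structure described above (not the paper's exact algorithm or notation): each agent holds a decision estimate mixed through a row stochastic matrix R and a gradient tracker mixed through a column stochastic matrix C, over a directed communication graph. The local costs, matrix entries, and stepsize below are illustrative assumptions of ours.

```python
import numpy as np

# Push-pull-style update on a 3-agent directed ring (1 -> 2 -> 3 -> 1,
# plus self-loops). Local costs f_i(x) = 0.5*(x - b_i)^2, so the global
# minimizer of the sum is mean(b).
b = np.array([1.0, 2.0, 3.0])
grad = lambda x: x - b                 # stacked local gradients

# R is row stochastic (rows sum to 1): pulling of decision variables.
R = np.array([[0.5, 0.0, 0.5],
              [0.2, 0.8, 0.0],
              [0.0, 0.6, 0.4]])
# C is column stochastic (columns sum to 1): pushing of gradient trackers.
C = np.array([[0.4, 0.0, 0.7],
              [0.6, 0.3, 0.0],
              [0.0, 0.7, 0.3]])

alpha = 0.05                           # small constant stepsize (our choice)
x = np.zeros(3)
y = grad(x)                            # tracker initialized at local gradients
for _ in range(5000):
    g_old = grad(x)
    x_new = R @ (x - alpha * y)        # pull neighbors' decisions + descend
    y = C @ y + grad(x_new) - g_old    # push trackers + gradient tracking
    x = x_new

print(x)  # each agent's estimate approaches mean(b) = 2.0
```

Note that neither R nor C needs to be doubly stochastic, which is what makes the scheme usable over directed graphs where doubly stochastic weights are hard to construct.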
We present calculations of magnetic exchange interactions and the critical temperature Tc in Ga1-xMnxAs, Ga1-xCrxAs, and Ga1-xCrxN. The local spin density approximation is combined with a linear-response technique to map the magnetic energy onto a Heisenberg Hamiltonian, but no significant further approximations are made. Special quasi-random structures in large unit cells are used to accurately model the disorder. Tc is computed using both a spin-dynamics approach and the cluster variation method developed for the classical Heisenberg model. We show the following: (i) configurational disorder results in large dispersions in the pairwise exchange interactions; (ii) the disorder strongly reduces Tc; (iii) clustering of the magnetic atoms, whose tendency is predicted from total-energy considerations, further reduces Tc. Additionally, the exchange interactions J(R) are found to decay exponentially with distance R on average, and the mean-field approximation is found to be a very poor predictor of Tc, particularly when J(R) decays rapidly. Finally, the effect of spin-orbit coupling on Tc is considered. With all these factors taken into account, Tc is reasonably predicted by the local spin-density approximation in MnGaAs without the need to invoke compensation by donor impurities.
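For context, the mean-field estimate that the abstract finds to be a poor predictor is the standard textbook one for the classical Heisenberg model (notation ours, not the paper's): with unit spin directions coupled by the computed exchange parameters,

```latex
H = -\sum_{i \neq j} J_{ij}\, \hat{e}_i \cdot \hat{e}_j ,
\qquad
k_B T_c^{\mathrm{MF}} = \frac{2}{3} \sum_{j \neq 0} J_{0j} .
```

Because $T_c^{\mathrm{MF}}$ depends only on the sum of the $J(R)$, it is insensitive to their spatial decay and site-to-site dispersion, which is consistent with the abstract's observation that mean field fails badly precisely when $J(R)$ decays rapidly.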
In this paper, we focus on solving a distributed convex optimization problem over a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. To this end, we consider a new distributed gradient-based method in which each node maintains two estimates, namely, an estimate of the optimal decision variable and an estimate of the gradient of the average of the agents' objective functions. From the viewpoint of an agent, the information about the decision variable is pushed to the neighbors, while the information about the gradients is pulled from the neighbors (hence the name "push-pull gradient method"). The method unifies algorithms with different types of distributed architecture, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architectures. We show that the algorithm converges linearly for strongly convex and smooth objective functions over a directed static network. In our numerical tests, the algorithm performs well even for time-varying directed networks. This is a preliminary version of the paper [1].
Footnotes: (1) The condition number of a smooth and strongly convex function is the ratio of its gradient Lipschitz constant to its strong convexity constant. (2) Constructing a doubly stochastic matrix over a directed graph requires weight balancing, which needs an independent iterative procedure across the network; consensus is a basic element in decentralized optimization.