2014
DOI: 10.1002/rnc.3164

Gradient‐free method for distributed multi‐agent optimization via push‐sum algorithms

Abstract: This paper studies the problem of minimizing the sum of convex functions that all share a common global variable, where each function is known only by one specific agent in the network. The underlying network topology is modeled as a time-varying sequence of directed graphs, each of which is endowed with a non-doubly stochastic matrix. We present a distributed method that employs gradient-free oracles and push-sum algorithms for solving this optimization problem. We establish the convergence by showing that the met…
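As an illustrative aside, the sketch below combines a push-sum (ratio-consensus) update over column-stochastic mixing matrices with a randomized two-point gradient-free oracle, which is the general recipe the abstract describes. It is a minimal sketch under stated assumptions (a Gaussian-smoothing oracle, an assumed diminishing step-size schedule, and a fixed directed ring for the toy run); the paper's exact update rule and step-size conditions may differ.

```python
import numpy as np

def two_point_oracle(f, z, mu, rng):
    # Randomized gradient-free estimate of a (sub)gradient of f at z
    # (a Gaussian-smoothing two-point estimator; the paper's exact
    # oracle may differ -- this choice is an assumption for illustration).
    u = rng.standard_normal(z.shape)
    return (f(z + mu * u) - f(z)) / mu * u

def gradient_free_push_sum(local_fs, mix, x0, steps, mu=1e-3, seed=0):
    # local_fs : list of callables f_i(x) -> float, one per agent
    # mix      : callable t -> column-stochastic matrix A(t)
    #            (rows = receivers, columns = senders)
    # x0       : (n, d) array of initial agent states
    rng = np.random.default_rng(seed)
    x = x0.copy()
    y = np.ones(len(local_fs))              # push-sum weights
    z = x.copy()
    for t in range(steps):
        A = mix(t)
        w = A @ x                           # mix the (biased) states
        y = A @ y                           # mix the weights
        z = w / y[:, None]                  # de-biased ratio estimates
        alpha = 0.1 / np.sqrt(t + 1)        # assumed diminishing step size
        g = np.stack([two_point_oracle(f, zi, mu, rng)
                      for f, zi in zip(local_fs, z)])
        x = w - alpha * g                   # gradient-free correction
    return z                                # each row approximates the minimizer

# Toy run: minimize sum_i ||x - c_i||^2 over a fixed directed ring.
if __name__ == "__main__":
    n, d = 5, 2
    centers = np.random.default_rng(1).standard_normal((n, d))
    fs = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in centers]
    A = 0.5 * np.eye(n) + 0.5 * np.roll(np.eye(n), 1, axis=0)  # column stochastic
    z = gradient_free_push_sum(fs, lambda t: A, np.zeros((n, d)), steps=3000)
    print(z.mean(axis=0), centers.mean(axis=0))  # the two should be close
```

The toy problem's minimizer is the mean of the centers, so the printed vectors should roughly agree; only column stochasticity (not double stochasticity) of the mixing matrix is required, which is the point of the push-sum correction.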

Cited by 61 publications (54 citation statements) · References 26 publications
“…In a typical setting of this problem, each agent is assigned a local cost function, and the control objective is to propose distributed controllers that guarantee consensus on the optimal solution of the sum of all local cost functions. Many effective algorithms have been proposed to achieve this goal in different situations …”
Section: Introduction (mentioning; confidence: 99%)
“…Many effective algorithms have been proposed to achieve this goal in different situations. [4][5][6][7][8][9] Here, we follow this technical line but consider high-order continuous-time nonlinear agents with unknown dynamics. While most of the existing works were devoted only to single-integrator agents, many distributed optimization tasks in practice are implemented by, or depend on, physical plants with continuous dynamics, e.g., source seeking in multi-robot systems, [10] attitude formation control of rigid bodies, [11] and optimal power dispatch over power networks. …”
Section: Introduction (mentioning; confidence: 99%)
“…To overcome the drawbacks caused by diminishing step sizes, some novel distributed optimization algorithms based on the auxiliary-variable method have been developed. [12][13][14][15] The main feature of these algorithms is that the step sizes are fixed (or nonincreasing) so as to ensure fast and exact convergence. However, the price is an increased computation and communication burden. …”
Section: Introduction (mentioning; confidence: 99%)
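For context on the fixed-step-size, auxiliary-variable algorithms the passage above alludes to, here is a minimal gradient-tracking sketch, one common instance of that family and not necessarily the specific schemes of [12]–[15]. It assumes synchronized agents, a doubly stochastic mixing matrix W, and differentiable local costs with available gradients.

```python
import numpy as np

def gradient_tracking(grads, W, x0, alpha=0.05, steps=500):
    # grads : list of callables grad_f_i(x) -> array, one per agent
    # W     : doubly stochastic mixing matrix (assumed here)
    # x0    : (n, d) initial agent states
    # The auxiliary variable s_i tracks the network-average gradient,
    # which is what lets the step size alpha stay fixed.
    x = x0.copy()
    g_old = np.stack([g(xi) for g, xi in zip(grads, x)])
    s = g_old.copy()                      # s_i(0) = grad f_i(x_i(0))
    for _ in range(steps):
        x = W @ x - alpha * s             # consensus step plus descent
        g_new = np.stack([g(xi) for g, xi in zip(grads, x)])
        s = W @ s + g_new - g_old         # update the gradient tracker
        g_old = g_new
    return x                              # all rows approach the minimizer
```

The extra variable s_i is exactly the added cost the quoted passage mentions: each iteration now exchanges both x_i and s_i between neighbors.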
“…Generally speaking, distributed optimization problems aim to minimize a global objective function, given as a sum of local objective functions each known only to an individual agent, in the absence or presence of constraints. To date, although a wide spectrum of results has been reported for discrete-time networks under various scenarios, ranging from unconstrained distributed optimization problems to constrained ones, [1][2][3][4][5] continuous-time algorithms have attracted increasing interest in recent years, mostly because many physical systems operate in a continuous-time domain, such as current flow in a smart grid. [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23] For instance, distributed convex optimization problems subject to local feasible-set constraints and local inequality and equality constraints have been studied in the work of Yang et al, [17] where a proportional-integral continuous-time algorithm with output information exchange has been designed. …”
Section: Introduction (mentioning; confidence: 99%)
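Since the passage above mentions proportional-integral continuous-time designs, the following is a minimal forward-Euler simulation of a generic PI consensus-optimization flow over an undirected graph with Laplacian L. It is a textbook-style form given only for illustration and is not claimed to be the specific algorithm of Yang et al.

```python
import numpy as np

def pi_flow(grads, L, x0, dt=1e-2, steps=20000):
    # Forward-Euler integration of the (generic) PI dynamics
    #   x_i' = -grad f_i(x_i) - (L x)_i - v_i    (proportional part + descent)
    #   v_i' =  (L x)_i                          (integral part)
    # At equilibrium L x = 0 (consensus) and sum_i grad f_i = 0 (optimality),
    # because the agent-wise sum of v stays at its initial value of zero.
    x = x0.copy()
    v = np.zeros_like(x0)
    for _ in range(steps):
        g = np.stack([gi(xi) for gi, xi in zip(grads, x)])
        Lx = L @ x
        x, v = x + dt * (-g - Lx - v), v + dt * Lx
    return x
```

The integral state v_i plays the same role as the auxiliary variables in the discrete-time algorithms above: it absorbs the disagreement among local gradients so that no diminishing gain is needed.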