2012
DOI: 10.1002/rnc.2856
Distributed primal–dual stochastic subgradient algorithms for multi‐agent optimization under inequality constraints

Abstract: We consider the multi‐agent optimization problem where multiple agents cooperatively optimize the sum of their local convex objective functions, subject to global inequality constraints and a convex constraint set, over a network. By characterizing the primal and dual optimal solutions as the saddle points of the associated Lagrangian function, which can be evaluated with stochastic errors, we propose distributed primal–dual stochastic subgradient algorithms for two cases: (i) the time m…
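The saddle-point idea in the abstract can be illustrated on a toy single-variable instance. This is a minimal sketch, not the paper's algorithm: the objective, constraint, noise level, and step-size schedule below are all illustrative choices. Minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0; the Lagrangian L(x, mu) = f(x) + mu g(x) has its saddle point at x* = 1, mu* = 2, and the iterates descend in the primal and ascend in the dual using noisy (stochastic) subgradients with diminishing step sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
# Saddle point of L(x, mu) = f(x) + mu * g(x): x* = 1, mu* = 2.
def grad_f(x):
    return 2.0 * (x - 2.0)      # gradient of the objective

def g(x):
    return x - 1.0              # inequality constraint value

x, mu = 0.0, 0.0
for k in range(1, 20001):
    a = 1.0 / np.sqrt(k)                   # diminishing step size
    noise = rng.normal(0.0, 0.1, size=2)   # stochastic subgradient errors
    # Primal descent on L, projected onto the constraint set X = [-5, 5];
    # the constraint's subgradient is g'(x) = 1.
    x = float(np.clip(x - a * (grad_f(x) + mu * 1.0 + noise[0]), -5.0, 5.0))
    # Dual ascent on L, projected onto mu >= 0.
    mu = max(0.0, mu + a * (g(x) + noise[1]))

print(round(x, 2), round(mu, 2))
```

The key structural point, reflected in the abstract, is that both iterates move along noisy subgradients of the same Lagrangian, so the pair (x, mu) drifts toward the saddle point despite the stochastic errors.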

Cited by 28 publications (22 citation statements)
References 25 publications
“…The main idea is to estimate the optimal point by a consensus term plus a negative gradient term, with deterministic or randomized iterations. To date, many researchers have applied consensus‐based distributed optimization algorithms to solve the problem and have presented many results. However, the basic consensus‐based subgradient algorithms need diminishing step sizes, which lead to a slow convergence rate.…”
Section: Introduction (confidence: 99%)
“…3,7 To date, many researchers have applied the consensus-based distributed optimization algorithm to solve the problem (1) and presented many results. [8][9][10][11] However, the basic consensus-based subgradient algorithms need diminishing step sizes, which lead to a slow convergence rate. To overcome the drawbacks caused by the diminishing step sizes, some novel distributed optimization algorithms based on the auxiliary-variables method have been developed.…”
Section: Introduction (confidence: 99%)
“…Recently, the problem of minimizing a sum of convex objective functions distributed among multiple agents over a network has attracted considerable attention. Each local objective function is known only to one agent in the network, so cooperation between agents is needed to reach the global objective. In contrast to a centralized optimization algorithm, a distributed algorithm has to take into account communication and coordination during optimization.…”
Section: Introduction (confidence: 99%)
“…This paper focuses on another cooperation approach, based on average consensus. In this case, each agent maintains its own estimate.…”
Section: Introduction (confidence: 99%)