2016
DOI: 10.1109/tcyb.2015.2464255

Regularized Primal–Dual Subgradient Method for Distributed Constrained Optimization

Abstract: In this paper, we study the distributed constrained optimization problem in which the objective function is the sum of local convex cost functions of distributed nodes in a network, subject to a global inequality constraint. To solve this problem, we propose a consensus-based distributed regularized primal-dual subgradient method. In contrast to existing methods, most of which require projecting the estimates onto the constraint set at every iteration, only one projection at the last iteration is needed for o…
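
The abstract sketches the algorithmic pattern: each node mixes estimates with its neighbors, takes a subgradient step on a regularized Lagrangian, and projects onto the constraint set only once, at the final iterate. Below is a minimal, illustrative Python sketch of that pattern on a toy scalar problem; the problem data, step sizes, mixing matrix, and regularization parameter are all assumptions for illustration, not the paper's exact algorithm or analysis.

```python
import numpy as np

# Illustrative sketch of a consensus-based regularized primal-dual
# subgradient iteration (problem data and parameters are hypothetical).

rng = np.random.default_rng(0)
n = 5                                  # number of nodes
a = rng.uniform(-2.0, 2.0, size=n)     # local data: f_i(x) = (x - a_i)^2
c = 0.5                                # global constraint: g(x) = x - c <= 0
W = np.full((n, n), 1.0 / n)           # doubly stochastic mixing matrix
                                       # (complete graph, for simplicity)
eta = 0.1                              # dual regularization parameter

x = np.zeros(n)                        # primal estimates, one per node
mu = np.zeros(n)                       # dual estimates, one per node
for t in range(1, 2001):
    alpha = 1.0 / np.sqrt(t)           # diminishing step size
    x_mix, mu_mix = W @ x, W @ mu      # consensus (mixing) step
    grad_x = 2.0 * (x_mix - a) + mu_mix    # d/dx of f_i(x) + mu * g(x)
    grad_mu = (x_mix - c) - eta * mu_mix   # d/dmu of mu*g(x) - (eta/2)*mu^2
    x = x_mix - alpha * grad_x             # primal descent, no projection
    mu = np.maximum(mu_mix + alpha * grad_mu, 0.0)  # dual ascent, mu >= 0

# Single projection onto the constraint set {x : x <= c} at the last
# iteration, the feature the abstract highlights.
x_final = np.minimum(x, c)
print(x_final)
```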

Cited by 137 publications (68 citation statements) | References 30 publications | Citing publications: 2017–2023
“…The problem (P1) is a nonlinear and nonconvex optimization problem, for which it is difficult to find an existing decentralized or distributed method. Most of the existing decentralized or distributed methods [18]-[20], [33]-[35], such as distributed primal-dual subgradient methods [18], dual decomposition [19], and distributed ADMM [20], are developed for convex optimization problems, and some of them can only handle linear constraints.…”
Section: E. The Optimization Problem
confidence: 99%
“…More specifically, both the system dynamics and the global objective function of the problem are nonlinear and nonconvex. Most of the existing decentralized or distributed methods, such as distributed primal-dual subgradient methods [18], dual decomposition [19], and the distributed Alternating Direction Method of Multipliers (ADMM) [20], cannot be directly applied to solve the problem, because these methods are generally established for convex problems, and some of them can only accommodate linear constraints.…”
Section: Introduction
confidence: 99%
“…Lemma 2: Let x(t) be a solution of the differential inclusion (3), and let f : R^n → R be a locally Lipschitz continuous and regular function [26]. Then (d/dt) f(x(t)) exists and (d/dt) f(x(t)) ∈ L_F f(x) almost everywhere.…”
Section: Definition
confidence: 99%
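
For context, the set-valued Lie derivative L_F f(x) in the quoted lemma is commonly defined through Clarke's generalized gradient; the following is that standard definition, and it is an assumption that it matches the exact form used in the citing paper's reference [26].

```latex
% Standard set-valued Lie derivative of a locally Lipschitz, regular f
% along a set-valued map F (assumed to match the form intended in [26]).
\[
  \mathcal{L}_F f(x) =
  \left\{ a \in \mathbb{R} \;:\; \exists\, v \in F(x)
          \ \text{with}\ \zeta^{\top} v = a
          \ \text{for all}\ \zeta \in \partial f(x) \right\},
\]
% where \partial f(x) denotes Clarke's generalized gradient of f at x.
```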
“…Under the influence of big data and large-scale systems, distributed optimization and computation have attracted increasing research attention. Both discrete-time algorithms [1]-[3] and continuous-time algorithms [4]-[6] have been given for various distributed optimization problems. The basic idea is that many interconnected agents in a network, having local information, …”
Section: Introduction
confidence: 99%
“…Two application examples are presented to validate the proposed approach: a distributed source localization problem and the parameter estimation of a neural network. Another relevant class of algorithms is that of distributed primal-dual methods (see, e.g., [6,7,8]). Within this framework, an iterative scheme combining dual decomposition and proximal minimization is introduced in [9].…”
confidence: 99%