2017
DOI: 10.1016/j.sysconle.2017.07.005

A fast proximal gradient algorithm for decentralized composite optimization over directed networks

Abstract: This paper proposes a fast decentralized algorithm for solving a consensus optimization problem defined in a directed networked multi-agent system, where the local objective functions have the smooth+nonsmooth composite form, and are possibly nonconvex. Examples of such problems include decentralized compressed sensing and constrained quadratic programming problems, as well as many decentralized regularization problems. We extend the existing algorithms PG-EXTRA and ExtraPush to a new algorithm PG-ExtraPush fo…
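Concretely, the smooth+nonsmooth composite form means each agent minimizes a local objective s_i(x) + r_i(x), with s_i smooth and r_i handled through its proximal operator. Below is a minimal sketch of a single proximal gradient step for the common special case r(x) = lam * ||x||_1, whose prox is soft-thresholding; the function names and the least-squares smooth part are illustrative assumptions, not code from the paper.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad_step(x, grad_s, alpha, lam):
    # One proximal gradient step on s(x) + lam * ||x||_1:
    # forward (gradient) step on the smooth part s, then
    # backward (proximal) step on the nonsmooth l1 regularizer.
    return soft_threshold(x - alpha * grad_s(x), alpha * lam)

# Illustrative smooth part: least squares, s(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x = np.zeros(5)
for _ in range(100):
    x = prox_grad_step(x, lambda z: A.T @ (A @ z - b), alpha=1e-2, lam=0.1)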

Cited by 18 publications (16 citation statements)
References 26 publications
“…where the last inequality follows from the fact that K has finite diameter D_K. Combining the inequalities in (16) and (18), we get…”
Section: Appendix A: Proof of Lemma
confidence: 97%
“…The works [27]-[29] developed a class of distributed optimization algorithms that are built on mirror descent, which generalize the projection step by using the Bregman divergence. Different from the aforementioned works, which deal only with non-composite objective functions, the authors in [16], [31] considered a decentralized composite optimization problem in which the local objective function of every node is composed of a smooth function and a nonsmooth regularizer. This problem naturally arises in many real applications, including distributed estimation in sensor networks [4], [10], distributed quadratic programming [31], and distributed machine learning [30], [32], to name a few.…”
Section: Introduction
confidence: 99%
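To make the Bregman-divergence generalization mentioned in this statement concrete, here is a minimal textbook-style sketch of one mirror descent step on the probability simplex with the negative-entropy mirror map (whose Bregman divergence is the KL divergence); the Euclidean projection of projected gradient descent is then replaced by a multiplicative update plus renormalization. This is a generic instance under those assumptions, not code from [27]-[29].

import numpy as np

def entropic_md_step(x, grad, alpha):
    # One mirror descent step on the simplex with the negative-entropy
    # mirror map: the Bregman (KL) projection reduces to a multiplicative
    # update followed by normalization back onto the simplex.
    y = x * np.exp(-alpha * grad)
    return y / y.sum()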
“…A different approach is employed by PG-EXTRA [20], which combines the proximal gradient method with a gradient tracking scheme. PG-ExtraPush, proposed in [24], further combines PG-EXTRA with a push-sum (or ratio) consensus scheme. A drawback of [20] and [24] is that, in order to choose the step size, the agents need to compute the minimum eigenvalue of the consensus matrix.…”
Section: Introduction
confidence: 99%
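As a rough single-machine simulation of the PG-EXTRA-style updates this statement describes (proximal gradient plus gradient tracking over a mixing matrix W, with W_tilde = (I + W)/2), the following sketch stacks the agents' iterates as rows of X. The update order and initialization are restated from the usual presentation of [20] and should be checked against that paper; the l1 prox and the function names are illustrative assumptions.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pg_extra(W, grad, alpha, lam, x0, iters=500):
    # Sketch of a PG-EXTRA-style iteration for min sum_i s_i(x) + lam*||x||_1.
    # W: doubly stochastic mixing matrix (rows/columns indexed by agents)
    # grad(X): stacked local gradients, row i is grad s_i evaluated at X[i]
    # x0: stacked initial iterates, one row per agent
    W_tilde = 0.5 * (np.eye(W.shape[0]) + W)
    X = x0.copy()
    X_half = W @ X - alpha * grad(X)            # first half-iterate
    X_new = soft_threshold(X_half, alpha * lam)  # prox step
    for _ in range(iters):
        # Gradient-tracking correction on the half-iterate, then prox.
        X_half = X_half + W @ X_new - W_tilde @ X - alpha * (grad(X_new) - grad(X))
        X, X_new = X_new, soft_threshold(X_half, alpha * lam)
    return X_new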
“…PG-ExtraPush, proposed in [24], further combines PG-EXTRA with a push-sum (or ratio) consensus scheme. A drawback of [20] and [24] is that, in order to choose the step size, the agents need to compute the minimum eigenvalue of the consensus matrix. To resolve this, a modification of PG-EXTRA, called NIDS, was proposed in [21], which allows each node to choose its step size independently (provided that there is agreement on an auxiliary parameter, which, however, does not depend on the topology of the network).…”
Section: Introduction
confidence: 99%
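Since the push-sum (or ratio) consensus scheme is the ingredient that lets PG-ExtraPush operate over directed graphs, where only a column-stochastic mixing matrix is available, here is a minimal standalone sketch of it; the matrix A and the iteration count are illustrative assumptions. Column stochasticity only requires each node to know its out-degree, which is why push-sum suits directed networks.

import numpy as np

def push_sum_average(A, x0, iters=200):
    # Push-sum (ratio) consensus over a directed graph.
    # A: column-stochastic mixing matrix; A[i, j] is the weight node j
    #    assigns to the value it sends to node i.
    # x0: initial values, shape (n,).
    # The ratio z = x / w converges to mean(x0) at every node when the
    # graph is strongly connected.
    x = x0.astype(float).copy()
    w = np.ones_like(x)
    for _ in range(iters):
        x = A @ x   # numerator: weighted sums of values
        w = A @ w   # denominator: push-sum weights
    return x / w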