2014
DOI: 10.1109/tac.2014.2298712
Fast Distributed Gradient Methods

Abstract: We study distributed optimization problems where N nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradient (with constant L), and bounded gradient. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications K and the per-node gradient evaluations k. Our first method, Distributed Nesterov Gradient…
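To make the setting concrete, the following is a minimal sketch of a Distributed Nesterov Gradient (D-NG)-style iteration for this problem, not the paper's reference implementation: each node mixes its neighbors' estimates through a doubly stochastic weight matrix W, takes a local gradient step, and applies a Nesterov-style extrapolation. The step-size schedule alpha_k = c/(k+1), the momentum weight k/(k+3), and all function/variable names are illustrative assumptions.

```python
# Hedged sketch of a D-NG-style distributed Nesterov iteration.
# Assumptions (not taken from the paper's text): doubly stochastic W,
# diminishing step alpha_k = c/(k+1), momentum weight beta_k = k/(k+3).
import numpy as np

def dng_sketch(grads, W, x0, c=0.1, num_iters=200):
    """grads: list of N per-node gradient functions grad_i(x) -> ndarray of shape (d,).
    W: (N, N) doubly stochastic weight matrix respecting the network graph.
    x0: (N, d) array of initial per-node estimates of the common variable."""
    N, d = x0.shape
    x_prev = x0.copy()
    y = x0.copy()
    for k in range(num_iters):
        alpha = c / (k + 1)                        # diminishing step size (assumed schedule)
        # each node averages its neighbors' y-iterates and steps along its local gradient
        g = np.vstack([grads[i](y[i]) for i in range(N)])
        x = W @ y - alpha * g
        # Nesterov-style extrapolation with an assumed momentum weight
        beta = k / (k + 3)
        y = x + beta * (x - x_prev)
        x_prev = x
    return x_prev                                  # per-node estimates of the common minimizer
```

For a toy quadratic instance one could pass, e.g., grads = [lambda x, Q=Q, b=b: Q @ x - b for Q, b in zip(Qs, bs)] with W a Metropolis weight matrix of the communication graph; these names are hypothetical.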

Cited by 574 publications (551 citation statements)
References 30 publications
“…4 we have shown that the sequence (V ) ∈N is converging. Our remaining task, which is the main result of this paper, is to demonstrate that the limit of the sequence (V ) ∈N is equal to the optimal value of the centralized cost function (7).…”
Section: B Convergence of Algorithm (mentioning, confidence: 71%)
“…Recent work in the field of multi-agent systems, particularly in the field of consensus, has seen a resurgence of interest in the area of distributed optimization; see for example [3], [4], [5], [6], [7], [8], [9] and the references therein. Much of this work assumes the existence of a global cost function decomposable into the sum of cost functions for each agent.…”
Section: Introduction (mentioning, confidence: 99%)
“…We establish the convergence rates of the expected optimality gap in the cost function (at any node) of mD-NG and mD-NC, in terms of the number of per-node gradient evaluations k and the number of per-node communications K, when the functions are convex and differentiable, with Lipschitz continuous and bounded gradients. We show that the modified methods achieve in expectation the same rates that the methods in [3] achieve on static networks, namely: mD-NG converges at rates and , while mD-NC has rates and , where is an arbitrarily small positive number. We explicitly give the convergence rate constants in terms of the number of nodes N and the network statistics, more precisely, in terms of the quantity (see ahead the paragraph with heading Notation.…”
Section: B Contributions (mentioning, confidence: 74%)
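The per-node metric referred to in this excerpt can be stated explicitly. A minimal formulation, using f = Σ_i f_i and f* for its minimum as in the abstract, with x_i(k) as our own (assumed) symbol for node i's estimate after k gradient evaluations:

```latex
% Expected optimality gap at an arbitrary node i after k per-node gradient
% evaluations. Here f = \sum_i f_i and f^\star = \min_x f(x) follow the abstract;
% the iterate symbol x_i(k) is our own naming, not necessarily the papers'.
\mathrm{gap}_i(k) \;=\; \mathbb{E}\bigl[\, f\bigl(x_i(k)\bigr) \,\bigr] \;-\; f^{\star}
```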
“…For problem (1), [3] (see also [14], [15]) presents two distributed Nesterov-like gradient algorithms for static (non-random) networks, referred to as D-NG (Distributed Nesterov Gradient algorithm) and D-NC (Distributed Nesterov gradient with Consensus iterations).…”
Section: B Contributions (mentioning, confidence: 99%)
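The excerpt above names the two algorithms from [3]. As a rough illustration of the D-NC idea (an outer Nesterov-style gradient step followed by several consensus rounds before the next gradient evaluation), here is a hedged sketch; the growing consensus budget tau, the constant step size 1/L, and the momentum weights are illustrative assumptions, not the parameters prescribed in the paper.

```python
# Hedged sketch of a D-NC-style iteration: one local gradient step per outer
# iteration, followed by tau rounds of neighbor averaging so the network
# agrees more tightly before the next step. tau, alpha, and beta are assumed.
import numpy as np

def dnc_sketch(grads, W, x0, L=1.0, num_iters=50):
    """grads: list of N per-node gradient functions; W: (N, N) doubly stochastic
    weight matrix; x0: (N, d) initial per-node estimates."""
    N, d = x0.shape
    x_prev = x0.copy()
    y = x0.copy()
    alpha = 1.0 / L                                # constant step size (assumed)
    for k in range(1, num_iters + 1):
        g = np.vstack([grads[i](y[i]) for i in range(N)])
        x = y - alpha * g                          # local Nesterov-style gradient step
        tau = int(np.ceil(np.log(k + 1))) + 1      # assumed slowly growing consensus budget
        for _ in range(tau):                       # tau rounds of neighbor averaging
            x = W @ x
        beta = (k - 1.0) / (k + 2.0)               # assumed momentum weight
        y = x + beta * (x - x_prev)
        x_prev = x
    return x_prev
```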