2019
DOI: 10.1109/lsp.2019.2925537

Distributed Nesterov Gradient Methods Over Arbitrary Graphs

Abstract: In this letter, we introduce a distributed Nesterov method, termed ABN, that does not require doubly stochastic weight matrices. Instead, the implementation is based on a simultaneous application of both row- and column-stochastic weights, which makes this method applicable to arbitrary (strongly connected) graphs. Since constructing column-stochastic weights requires additional information (the number of outgoing neighbors at each agent), not available in certain communication protocols, we derive a variation, t…
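To make the weight structure described in the abstract concrete, the following is a minimal sketch of an AB-style gradient-tracking iteration with a Nesterov-type momentum term over a directed ring. The toy quadratic objectives, the graph, and the step-size and momentum values are illustrative assumptions; this is not the exact ABN recursion from the letter.

```python
import numpy as np

# A minimal sketch of an AB-style gradient-tracking iteration with a
# Nesterov-type momentum term, run over a directed ring (strongly connected).
# R is row-stochastic and C is column-stochastic; neither is doubly stochastic.
# The toy objectives f_i(x) = (x - i)^2 / 2, the graph, the step size and the
# momentum parameter are illustrative assumptions, not the exact ABN recursion.

n = 5                                            # number of agents

def grad(i, x):
    """Gradient of agent i's local objective f_i(x) = (x - i)^2 / 2."""
    return x - i

# Directed ring with self-loops: agent i also receives from agent (i+1) mod n.
A = np.eye(n) + np.roll(np.eye(n), 1, axis=1)
R = A / A.sum(axis=1, keepdims=True)             # row-stochastic (rows sum to 1)
C = A / A.sum(axis=0, keepdims=True)             # column-stochastic (needs out-degrees)

alpha, beta = 0.02, 0.2                          # step size and momentum (require tuning)
x = np.zeros(n)                                  # local estimates
x_prev = x.copy()
y = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers, y_0 = local gradients

for _ in range(2000):
    s = x + beta * (x - x_prev)                  # Nesterov-style extrapolation
    x_prev = x
    x = R @ s - alpha * y                        # mix estimates with row-stochastic R
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    g_old = np.array([grad(i, x_prev[i]) for i in range(n)])
    y = C @ y + g_new - g_old                    # track the average gradient with C

print(x)  # each agent should approach 2.0, the minimizer of (1/n) * sum_i f_i
```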

Cited by 52 publications (41 citation statements)
References 47 publications

“…where y_k ∈ ℝ^n is a variable that is corrected in the next update. Referring to [31], this method closes the gap to the lower oracle-complexity bound for this function class. Furthermore, the Nesterov method corrects the gradient in every iteration, which in some sense makes the gradient update more flexible and can accelerate convergence.…”
Section: The Nesterov Method
confidence: 99%
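As a concrete illustration of the "correction" described in the statement above, here is a minimal sketch of Nesterov's accelerated gradient method in its constant-momentum form for strongly convex objectives. The quadratic objective, step size, and iteration count are toy choices, not taken from the cited work.

```python
import numpy as np

# A minimal sketch of Nesterov's accelerated gradient method (constant-momentum
# variant for strongly convex objectives): the gradient is evaluated at an
# extrapolated ("corrected") point y each iteration.  The quadratic objective,
# step size and iteration count are toy choices, not taken from the cited work.

Q = np.diag([1.0, 100.0])            # f(x) = 0.5 * x^T Q x, condition number kappa = 100

def grad_f(x):
    return Q @ x

L, mu = 100.0, 1.0                   # smoothness and strong-convexity constants
alpha = 1.0 / L                      # step size
beta = (np.sqrt(L / mu) - 1.0) / (np.sqrt(L / mu) + 1.0)   # momentum, about 0.818

x = np.array([1.0, 1.0])
x_prev = x.copy()

for _ in range(200):
    y = x + beta * (x - x_prev)      # extrapolated ("corrected") point
    x_prev = x
    x = y - alpha * grad_f(y)        # gradient step taken at y, not at x

print(x)   # close to the minimizer [0, 0]; plain gradient descent with the same
           # step size contracts only by a factor of about 0.99 per iteration
```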
“…, referring to [26][27][28][29][30][31][39]. When dealing with the distributed optimisation problem over time-varying directed networks, the TV- algorithm converges linearly.…”
Section: The TV- Algorithm
confidence: 99%
“…Nesterov-type acceleration was first proposed by Nesterov in [29], originally aimed at accelerating gradient-descent-type (first-order) methods. Recently, this technique has been extended to wider applications, such as ADMM-based algorithms [30] and gradient-tracking-based algorithms [31]. Inspired by these works, an accelerated version of RDDGT with Nesterov momentum, named RDDGT-N, can be derived, where β is the momentum parameter and the x-update is changed to a two-step process:…
Section: Algorithm Acceleration
confidence: 99%
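The update that follows the colon is truncated in the statement above. Purely as an illustration of what such a two-step momentum x-update typically looks like in a gradient-tracking iteration (a generic pattern with assumed mixing weights w_ij, step size α, and gradient tracker y_i^k, not the actual RDDGT-N update), one common form is:

```latex
% Generic two-step (extrapolate, then mix-and-descend) momentum pattern for the
% x-update in a gradient-tracking iteration -- illustrative only, not RDDGT-N:
s_i^{k}   = x_i^{k} + \beta \,\bigl( x_i^{k} - x_i^{k-1} \bigr), \qquad
x_i^{k+1} = \sum_{j \in \mathcal{N}_i} w_{ij}\, s_j^{k} \;-\; \alpha\, y_i^{k}.
```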
“…Saadatniaki, Xin and Khan [28] propose a variant of the AB/push-pull method with time-varying weight matrices and show that the proposed method converges linearly to the optimal solution when each local objective is smooth and the global objective is strongly convex. Acceleration techniques have also been incorporated into the AB/push-pull method: Xin and Khan [29] employ the heavy-ball method to accelerate the AB/push-pull method, obtaining an R-linear rate for strongly convex, smooth objectives; Xin, Jakovetić and Khan [30] combine the Nesterov gradient method with the AB/push-pull method and show that the new method achieves robust numerical performance for both convex and strongly convex objectives. Moreover, extended versions of the AB/push-pull method have been used to solve practical problems, such as resource allocation [31] and the distributed multi-cluster game [32].…”
Section: Introduction
confidence: 99%
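For context on the two acceleration schemes contrasted in the statement above, the centralized prototypes they build on can be sketched as follows; the distributed variants cited there add consensus and gradient-tracking terms on top of these, and α, β here are generic step-size and momentum parameters.

```latex
% Heavy-ball (Polyak) momentum: the momentum term is added to the update itself
x^{k+1} = x^{k} - \alpha \,\nabla f(x^{k}) + \beta \,\bigl( x^{k} - x^{k-1} \bigr)

% Nesterov momentum: the gradient is evaluated at an extrapolated point
y^{k}   = x^{k} + \beta \,\bigl( x^{k} - x^{k-1} \bigr), \qquad
x^{k+1} = y^{k} - \alpha \,\nabla f(y^{k})
```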