2021
DOI: 10.1002/asjc.2721
An accelerated distributed online gradient push‐sum algorithm on time‐varying directed networks

Abstract: This paper investigates a distributed online optimization problem with convex objective functions on time-varying directed networks, where each agent holds its own convex cost function and the goal is to cooperatively minimize the sum of the local cost functions. To tackle such optimization problems, an accelerated distributed online gradient push-sum algorithm is first proposed, which combines the momentum acceleration technique and the push-sum strategy. Then, we specifically analyze the regret for the pro…
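The full update rule is given in the paper; as a rough, non-authoritative illustration of how a push-sum (ratio-consensus) step can be combined with an online gradient step and a momentum term, a minimal Python sketch follows. The column-stochastic weight construction, the heavy-ball style momentum applied to the push-sum numerators, and the names `grads`, `adjs`, `alpha`, `beta` are illustrative assumptions, not the algorithm as stated in the paper.

```python
import numpy as np

def column_stochastic_weights(adj):
    # adj[i, j] = 1 means there is an edge j -> i (agent i hears from agent j).
    # Each sender splits its mass uniformly over its out-neighbours plus itself,
    # so every column of A sums to one (column stochasticity), as push-sum requires.
    n = adj.shape[0]
    A = adj.astype(float) + np.eye(n)
    return A / A.sum(axis=0, keepdims=True)

def online_gradient_push_sum(grads, adjs, alpha=0.05, beta=0.4, dim=2):
    # grads[t][i] is a callable returning agent i's gradient of its time-t cost.
    # adjs[t] is the directed adjacency matrix of the time-varying network at step t.
    T, n = len(grads), adjs[0].shape[0]
    x = np.zeros((n, dim))        # push-sum numerators
    y = np.ones(n)                # push-sum denominators (scalar weights)
    x_prev = np.zeros_like(x)     # previous numerators, used by the momentum term
    estimates = []
    for t in range(T):
        A = column_stochastic_weights(adjs[t])
        w = A @ x                 # mix numerators over the directed graph
        y = A @ y                 # mix the scalar weights the same way
        z = w / y[:, None]        # de-biased local estimates (ratio consensus)
        estimates.append(z.copy())
        g = np.stack([grads[t][i](z[i]) for i in range(n)])
        # online gradient step plus a heavy-ball style momentum term;
        # a constant step size is used here for brevity, whereas regret analyses
        # typically require a diminishing step size such as alpha_t ~ 1/sqrt(t).
        x, x_prev = w - alpha * g + beta * (x - x_prev), x
    return estimates
```

For instance, with quadratic local losses f_{t,i}(z) = ||z - theta_{t,i}||^2, each grads[t][i] would be lambda z: 2 * (z - theta_ti), and the returned estimates z_i(t) are the points at which regret would be evaluated.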

Cited by 3 publications (3 citation statements)
References: 28 publications
“…which is a perturbed system with nonvanishing perturbation ν′(t). Similar to the analysis of system (12), the derivative of V along the trajectories of perturbed system (18) satisfies…”
Section: Lemma 1 (Suppose That Assumption 3 Holds; Then Linear ESO (13)…)
Mentioning, confidence: 99%
“…In recent years, results on asymptotic and exponential convergence of distributed resource allocation algorithms have been obtained for agents communicating over undirected graphs [9][10][11][12][13][14][15]. Related results were also extended to weight-(un)balanced directed graphs (digraphs) and switching networks [16][17][18][19][20]. In addition to the asymptotic and exponential convergence results, recent work has dealt with finite-time and predefined-time convergence of distributed algorithms [21][22][23]…”
Section: Introduction
Mentioning, confidence: 99%
“…Moreover, some accelerated and improved methods, such as the Nesterov accelerated gradient method and adaptive gradient methods, have been leveraged to further improve performance. For example, the momentum acceleration technique was exploited to improve performance under time-varying unbalanced communication graphs in [128], where an improved static regret bound of O(√(1 + log T) + √T) is established for convex cost functions. Also, adaptive gradient methods have been integrated into DOL in [129][130][131]…”
Section: Metric
Mentioning, confidence: 99%
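For context, the static regret referred to in the statement above is conventionally defined as follows (standard notation for distributed online learning with n agents over a horizon T; this formulation is supplied for readability and is not quoted from the cited survey):

$$
R_T(j) \;=\; \sum_{t=1}^{T}\sum_{i=1}^{n} f_{t,i}\bigl(x_j(t)\bigr) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T}\sum_{i=1}^{n} f_{t,i}(x),
$$

evaluated at the iterates x_j(t) of any fixed agent j. A sublinear bound such as O(√(1 + log T) + √T) implies that the time-averaged regret R_T(j)/T vanishes as T grows.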