2019 IEEE 58th Conference on Decision and Control (CDC)
DOI: 10.1109/cdc40024.2019.9029852
Distributed Online Learning over Time-varying Graphs via Proximal Gradient Descent

Cited by 5 publications (5 citation statements)
References 26 publications
“…Remark 3. Recent studies on distributed online convex optimization have shown that the upper bound on the dynamic regret can be as tight as O(1 + C_T + V_T) or O(log T (1 + C_T)) when the loss functions are strongly convex and smooth (Zhang et al., 2020; Dixit et al., 2019). Theorem 3 shows that by employing multiple consensus averaging iterations over both local decisions and local gradients, DOMD-MADGC can improve the dynamic regret bound to O(1 + C_T).…”
Section: Improved Dynamic Regret
confidence: 96%
“…A distributed online gradient tracking algorithm is proposed in (Lu et al., 2019), which has a dynamic regret bound of O(√(1 + C_T) T^{3/4} √(ln T)). The dynamic regret of distributed online proximal gradient descent with O(log t) communication steps per round is bounded by O(log T (1 + C_T)) (Dixit et al., 2019). A bound using the gradient variation as the regularity measure is given in (Li et al., 2020), which is improved to O(1 + C_T + V′_T) in (Zhang et al., 2020), where V′_T is a variant of V_T in which gradients are evaluated at the optimal points.…”
Section: Related Work
confidence: 99%
“…). [26] considered time-varying network structures and showed that distributed proximal OGD achieves a dynamic regret of O(log T (1 + C*_T)) for strongly convex functions. [27] developed a method based on gradient tracking and derived a regret bound in terms of C*_T and a gradient path length.…”
Section: B Contributions
confidence: 99%
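The statement above describes distributed proximal online gradient descent over a time-varying network: in each round, agents mix their local decisions via consensus averaging, take a local gradient step, and apply a proximal operator. The following is a minimal sketch of that pattern, not the paper's exact algorithm; the uniform mixing matrices, step size, ℓ1 regularizer, and the `n_consensus` parameter are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||x||_1 (illustrative choice of regularizer)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def distributed_online_prox_gd(grads, mixing, x0, step=0.1, lam=0.01, n_consensus=3):
    """Sketch of distributed online proximal gradient descent.

    grads   : list over rounds t of lists over agents i of gradient callables
    mixing  : list over rounds of doubly stochastic mixing matrices W_t
    x0      : (n_agents, d) array of initial local decisions
    Returns the history of local decisions, one (n_agents, d) array per round.
    """
    x = x0.copy()
    history = [x.copy()]
    for g_t, W in zip(grads, mixing):
        # Several consensus averaging steps over local decisions
        # (the quoted works use O(log t) such steps per round)
        for _ in range(n_consensus):
            x = W @ x
        # Local gradient step followed by the proximal step
        x = np.stack([soft_threshold(x[i] - step * g_t[i](x[i]), step * lam)
                      for i in range(x.shape[0])])
        history.append(x.copy())
    return history
```

With static quadratic losses f_i(x) = ½‖x − b_i‖² and no regularization (lam=0), the agents reach consensus and track the minimizer of the average loss, i.e. the mean of the b_i.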
“…Additional dynamic regret bounds have also been derived for centralized OCO, e.g., Mokhtari et al. (2016); Zhang et al. (2017a); Besbes et al. (2015). In distributed implementation, several recent works have proposed methods that provide dynamic regret guarantees under various assumptions on the convexity and smoothness of the objective functions (Shahrampour and Jadbabaie (2018); Zhang et al. (2019); Dixit et al. (2019); Lu et al. (2020); Sharma et al. (2020); Eshraghi and Liang (2020); Li et al. (2021)). To the best of our knowledge, for gradient/projection-based distributed OCO, the tightest known dynamic regret bound for general convex cost functions is O(√T (1 + P_T)) (Shahrampour and Jadbabaie (2018)).…”
Section: Dynamic Regret Of Gradient-based and Projection-based OCO
confidence: 99%