2019 IEEE 58th Conference on Decision and Control (CDC) 2019
DOI: 10.1109/cdc40024.2019.9029248
Randomized Gradient-Free Distributed Online Optimization with Time-Varying Cost Functions

Abstract: This paper presents a randomized gradient-free distributed online optimization algorithm, with a group of agents whose local objective functions are time-varying. It is worth noting that the value of the local objective function is only revealed to the corresponding agent after the decision is made at each time-step. Thus, each agent updates the decision variable using the local objective function value of its last decision and the information collected from its immediate in-neighbors. A randomized gradient-fr…

Cited by 22 publications (28 citation statements). References 33 publications.
“…In fact, ĝ_{i,t}(x) is an unbiased gradient estimator of f_{i,t}(x). Notably, Gaussian random variables are used to construct zeroth-order oracles in [28], [35], [41], which cannot be applied in our setting: Gaussian random variables do not have finite support, so the perturbation x + µζ_{i,t} may lie outside of Ω.…”
Section: Algorithm Development
confidence: 99%
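The finite-support point above can be illustrated with a minimal sketch of a two-point zeroth-order gradient estimator (a generic construction under our own assumptions, not the paper's exact oracle): sampling the perturbation direction uniformly from the unit sphere keeps the perturbation within distance µ of x, whereas a Gaussian direction has unbounded norm and can push x + µζ outside a bounded feasible set Ω.

```python
import numpy as np

def zo_gradient(f, x, mu, rng):
    """Two-point zeroth-order gradient estimate of f at x.

    The direction zeta is drawn uniformly from the unit sphere, so the
    query point x + mu*zeta lies within distance mu of x (finite
    support), unlike a Gaussian perturbation whose norm is unbounded.
    """
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)          # uniform on the unit sphere
    return (d / mu) * (f(x + mu * zeta) - f(x)) * zeta

# usage: estimate the gradient of a simple quadratic at x = (1, 2)
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x)          # true gradient is x itself
x = np.array([1.0, 2.0])
est = np.mean([zo_gradient(f, x, 1e-4, rng) for _ in range(20000)], axis=0)
# averaging many one-sample estimates approaches the true gradient (1, 2)
```

A single sample of this estimator is noisy; only its expectation matches the (smoothed) gradient, which is why such oracles appear inside averaging or diminishing-step-size schemes.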
“…Extending distributed algorithms from weight-balanced networks to general directed networks is non-trivial [19]–[26]. The authors of [27], [28] proposed distributed online optimization algorithms inspired by the push-sum based algorithm [19] and the surplus-based method [24], respectively. However, the former cannot directly handle constrained optimization problems via projection-based methods, and the latter relies on a global parameter, depending on the weight matrices, that must be known a priori.…”
Section: Introduction
confidence: 99%
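The push-sum idea referenced above can be sketched with a minimal average-consensus example (the three-node digraph and weights here are hypothetical): each node carries a value and a weight, both mixed by a column-stochastic matrix, and the ratio of the two converges to the network average even though the digraph is not weight-balanced.

```python
import numpy as np

# Push-sum average consensus on a directed ring 0 -> 1 -> 2 -> 0 with
# self-loops. The mixing matrix A is column-stochastic (each node splits
# its mass equally between itself and its out-neighbor) but not
# row-stochastic, so plain averaging x <- A @ x would be biased; the
# ratio x_i / y_i corrects for this imbalance.
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])

x = np.array([3.0, 6.0, 9.0])     # initial values; network average is 6
y = np.ones(3)                    # push-sum weights

for _ in range(100):
    x = A @ x                     # push value mass along out-edges
    y = A @ y                     # push weight mass the same way

ratios = x / y                    # every node's ratio approaches 6.0
```

Column-stochasticity preserves the total mass of x and y, which is exactly what makes the ratio an unbiased average; building such column weights only requires each node to know its own out-degree, not a global balancing parameter.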
“…There is a limited but important body of work on dynamic and distributed OCO, including work on mirror updates [18], [19], adaptive search directions [20], gradient-free methods [21], and time-varying constraints [22]. This paper extends that body of work by using a distributed weighted dual averaging update [1], [23] in the dynamic setting.…”
confidence: 99%
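For context, the core of a dual averaging update can be sketched in a centralized, single-agent form (step size, radius, and function names here are illustrative assumptions; the distributed weighted variant additionally mixes the dual variables over the network):

```python
import numpy as np

def dual_averaging_step(z, g, alpha, radius):
    """One centralized dual-averaging update with psi(x) = 0.5*||x||^2:
    accumulate the new (sub)gradient into the dual variable z, then map
    z back to the primal feasible ball of the given radius."""
    z = z + g
    x = -alpha * z
    norm = np.linalg.norm(x)
    if norm > radius:
        x *= radius / norm            # Euclidean projection onto the ball
    return z, x

# usage: minimize f(x) = 0.5*||x - c||^2, gradient x - c, over a radius-5 ball
c = np.array([1.0, 1.0])
z, x = np.zeros(2), np.zeros(2)
for _ in range(200):
    z, x = dual_averaging_step(z, x - c, alpha=0.1, radius=5.0)
# the iterate x converges to the unconstrained minimizer c = (1, 1)
```

Unlike projected gradient descent, the primal iterate is regenerated each round from the accumulated dual variable, which is the property that lets distributed variants average dual states across agents.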