2016
DOI: 10.1109/tac.2016.2525928

Online Distributed Convex Optimization on Dynamic Networks

Abstract: This paper presents a distributed optimization scheme over a network of agents in the presence of cost uncertainties and over switching communication topologies. Inspired by recent advances in distributed convex optimization, we propose a distributed algorithm based on dual subgradient averaging. The objective of this algorithm is to minimize a cost function cooperatively. Furthermore, the algorithm changes the weights on the communication links in the network to adapt to varying reliability of neighboring a…


Cited by 148 publications (81 citation statements)
References 35 publications
“…On the other hand, the work [10] extended the distributed dual averaging algorithm in [2] to the online setting and derived an O(1/√T) average regret rate. The authors in [11] further applied the online distributed dual averaging algorithm to dynamic networks. The authors in [12] proposed an online distributed optimization algorithm based on mirror descent and established its convergence results.…”
Section: Introduction
confidence: 99%
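The excerpt above refers to online distributed dual averaging, in which each agent mixes its neighbors' dual variables and adds its own local subgradient each round. The following is a minimal illustrative sketch of that idea, not the exact algorithm of any cited paper: the quadratic losses f_{i,t}(x) = (x − b_{i,t})², the 4-agent ring topology, the step-size schedule, and the scalar decision variables are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, T = 4, 200

# Doubly stochastic mixing matrix for a 4-agent ring (assumed topology).
P = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Per-round targets b_{i,t}; each agent's loss this round is (x_i - b_{i,t})^2,
# revealed only after the agent commits to its decision.
targets = rng.normal(size=(T, n_agents))

z = np.zeros(n_agents)   # dual variables: mixed running gradient sums
x = np.zeros(n_agents)   # each agent's current scalar decision
total_loss = 0.0

for t in range(T):
    b = targets[t]
    total_loss += float(np.sum((x - b) ** 2))  # losses incurred this round
    grads = 2.0 * (x - b)                      # local subgradients at x
    z = P @ z + grads                          # mix neighbors' duals, add gradient
    step = 1.0 / np.sqrt(t + 1)                # O(1/sqrt(t)) step-size schedule
    x = -step * z                              # prox step for psi(x) = x^2 / 2

# Average regret against the best fixed decision in hindsight,
# which for these quadratic losses is the grand mean of the targets.
x_star = float(targets.mean())
best_loss = float(np.sum((x_star - targets) ** 2))
avg_regret = (total_loss - best_loss) / T
```

With zero-mean targets the best fixed comparator sits near zero, and the average regret per round shrinks as T grows, consistent with the O(1/√T) rate cited above.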
“…The work in [16] extends these results to zeroth-order methods. Unconstrained distributed online gradient descent algorithms are studied in [17]-[20]. These distributed methods deal with unconstrained problems and still achieve sublinear regret rates provided the step sizes are chosen appropriately and the network of agents is connected.…”
Section: Introduction
confidence: 99%
“…In the OCO literature, a benchmark to evaluate a strategy is provided by the regret [29,19,36,18,23,21,28], which measures the difference between the player's sequence of decisions and the best strategy in hindsight (i.e., the minimization of each f_t). A strategy is deemed successful if its regret is sublinear in T.…”
Section: Introduction
confidence: 99%
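In the usual OCO notation (which the excerpt above appears to follow), the static regret of an online sequence x_1, …, x_T can be written as:

```latex
% Static regret against the best fixed decision in hindsight.
\mathrm{Reg}(T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
\qquad \text{sublinear:} \quad \lim_{T \to \infty} \frac{\mathrm{Reg}(T)}{T} = 0 .
```

Sublinearity means the player's average per-round loss approaches that of the best fixed decision chosen with full hindsight.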
“…This result was improved in [23], which obtained O(1 + C_T), where C_T is the sum of the distances between successive reference points. Very recently, regret has also been investigated in distributed OCO settings [21,28].…”
Section: Introduction
confidence: 99%
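The O(1 + C_T) bound mentioned above concerns dynamic regret, measured against a time-varying sequence of reference points rather than one fixed comparator. Under the standard definitions (a notational assumption here, since the excerpt does not spell them out), the comparator sequence x_t^* and the path length C_T are:

```latex
% Dynamic regret against per-round reference points x_t^{*},
% with C_T the path length of the comparator sequence.
\mathrm{Reg}^{d}(T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t\!\left(x_t^{*}\right),
\qquad C_T \;=\; \sum_{t=1}^{T-1} \left\lVert x_{t+1}^{*} - x_{t}^{*} \right\rVert .
```

When the reference points drift slowly (C_T sublinear in T), the O(1 + C_T) bound yields sublinear dynamic regret.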