52nd IEEE Conference on Decision and Control 2013
DOI: 10.1109/cdc.2013.6760092
Online distributed optimization via dual averaging

Cited by 96 publications (92 citation statements). References 15 publications.
“…Moreover, Rabbat [16] proposes a decentralized mirror-descent method for stochastic composite optimization problems and provides guarantees for strongly convex regularizers. Duchi et al. [17] study dual averaging for distributed optimization, and the extension of dual averaging to online distributed optimization is considered in [18]. Mateos-Núñez and Cortés [19] consider online optimization using subgradient descent of local functions, where the graph structure is time-varying.…”
Section: Introduction
confidence: 99%
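
A minimal sketch may help make the distributed dual-averaging scheme of [17] concrete: each agent mixes its accumulated dual variable (a running sum of subgradients) with its neighbors' and then maps it to a primal iterate through a projection. Everything in the sketch below is an illustrative assumption rather than the cited construction: the complete-graph mixing matrix P, the quadratic local losses, the feasible set [-1, 1], and the step sizes alpha_t = 1/sqrt(t).

```python
import numpy as np

# Sketch of distributed dual averaging in the spirit of [17] (hypothetical setup):
# n agents, local losses f_i(x) = 0.5 * (x - b_i)^2, feasible set [-1, 1].
n, T = 4, 200
b = np.array([-0.8, -0.2, 0.4, 1.0])    # targets of the local quadratic losses
P = np.full((n, n), 1.0 / n)            # doubly stochastic mixing matrix (complete graph)
z = np.zeros(n)                         # dual variables: accumulated, mixed subgradients
x = np.zeros(n)                         # primal iterates, one per agent

for t in range(1, T + 1):
    g = x - b                           # local subgradients of f_i at x_i
    z = P @ z + g                       # consensus step on duals, then add new subgradient
    x = np.clip(-z / np.sqrt(t), -1.0, 1.0)  # Euclidean prox step with alpha_t = 1/sqrt(t)

print("agent iterates:", x)             # all approach the global minimizer b.mean() = 0.1
```

All iterates approach the minimizer of the sum of the local losses, even though each agent only ever sees its own subgradients and its neighbors' dual variables.
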
“…In this section, we obtain sublinear bounds for the regret and the constraint violation defined in (8) and (10), respectively. The next proposition proves that the regret in the cost function is sublinearly bounded.…”
Section: Results
confidence: 99%
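
The definitions labeled (8) and (10) are not reproduced in this excerpt, so the sketch below assumes the standard ones from this literature: regret against the best fixed feasible decision in hindsight, and accumulated violation of an inequality constraint g(x) <= 0. It is a hypothetical numerical check of sublinearity for plain online projected gradient descent, not the algorithm analyzed in the cited work.

```python
import numpy as np

# Hypothetical check that Regret(T)/T and Violation(T)/T shrink as T grows.
# Assumed (standard) definitions, since (8) and (10) are not in the excerpt:
#   Regret(T)    = sum_t f_t(x_t) - min over feasible x of sum_t f_t(x)
#   Violation(T) = sum_t max(0, g(x_t))   for the constraint g(x) <= 0
rng = np.random.default_rng(0)

for T in (1_000, 10_000, 100_000):
    a = rng.uniform(-1.0, 1.0, size=T)     # time-varying losses f_t(x) = 0.5*(x - a_t)^2
    x, loss, viol = 0.0, 0.0, 0.0
    for t in range(T):
        loss += 0.5 * (x - a[t]) ** 2
        viol += max(0.0, x - 0.5)          # illustrative constraint g(x) = x - 0.5 <= 0
        x -= (x - a[t]) / np.sqrt(t + 1)   # online gradient step, eta_t = 1/sqrt(t+1)
        x = min(max(x, -1.0), 1.0)         # projection onto the box [-1, 1]
    best = min(max(a.mean(), -1.0), 0.5)   # best fixed feasible point in hindsight
    regret = loss - 0.5 * ((best - a) ** 2).sum()
    print(f"T={T:>6}: regret/T={regret / T:.5f}  violation/T={viol / T:.5f}")
```

Both time averages shrink as T grows, which is what sublinear regret and constraint violation mean.
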
“…The consensus framework (A) was used in [8], [9], where a variant of the dual-averaging method of Nesterov [10] for online distributed optimization was proposed for undirected networks [8] and for time-invariant digraphs with time-varying weights [9]. Other recent work includes [11], [12], which employ the push-sum protocol [13], [14], allowing for time-varying, weight-imbalanced digraphs.…”
confidence: 99%
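
The push-sum protocol of [13], [14] is what lets [11], [12] handle digraphs whose weight matrices are only column stochastic rather than doubly stochastic. The sketch below is a hypothetical illustration on a fixed directed ring; the cited works also allow the digraph to vary over time.

```python
import numpy as np

# Push-sum (ratio consensus) sketch: every node mixes a value y_i and a weight
# w_i with the same column-stochastic matrix C; the ratio y_i / w_i converges
# to the exact average even though C is not doubly stochastic.
values = np.array([1.0, 3.0, 5.0, 7.0])   # initial node values; their average is 4.0
n = len(values)
keep = [0.5, 0.8, 0.3, 0.6]               # fraction of mass each node keeps for itself

# Directed ring with self-loops: node j keeps keep[j] of its mass and pushes the
# rest to its out-neighbor (j + 1) % n. Columns of C sum to 1; rows do not.
C = np.zeros((n, n))
for j in range(n):
    C[j, j] = keep[j]
    C[(j + 1) % n, j] = 1.0 - keep[j]

y, w = values.copy(), np.ones(n)
for _ in range(100):
    y = C @ y                             # push value mass along outgoing edges
    w = C @ w                             # push weight mass the same way
print("push-sum ratios:", y / w)          # every entry approaches 4.0
```

The weight sequence w corrects for the imbalance of the digraph: y alone converges to a skewed combination of the initial values, but the ratio y/w recovers the exact average.
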
“…It is worth emphasizing that the objective functions considered in the aforementioned works are time-invariant. In many real applications, however, the objective functions change over time because of the dynamically changing and uncertain nature of the environment; distributed estimation in sensor networks is one such example [10]. Online optimization is a powerful tool for dealing with time-varying cost functions that satisfy certain properties (see, e.g., [5]–[9]).…”
Section: Introduction
confidence: 99%
“…In [13], the authors developed a distributed autonomous online learning algorithm based on computing local subgradients, and they derived an O(ln T / T) average regret rate for strongly convex cost functions. The work [10], on the other hand, extended the distributed dual-averaging algorithm of [2] to the online setting and derived an O(1/√T) average regret rate. The authors in [11] further applied the online distributed dual-averaging algorithm to dynamic networks.…”
Section: Introduction
confidence: 99%
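
Putting the pieces together, the extension described in [10] amounts to driving the dual-averaging update shown in the earlier sketch with time-varying subgradients and measuring regret against the best fixed decision in hindsight. The sketch below is a hypothetical illustration of that idea, not the construction or analysis of the cited paper; the network, the quadratic losses, and the step sizes are all assumptions.

```python
import numpy as np

# Hypothetical sketch of online distributed dual averaging: the dual update is fed
# time-varying subgradients of f_{i,t}(x) = 0.5 * (x - a_{i,t})^2 on [-1, 1], and
# the network regret is measured against the best fixed decision in hindsight.
rng = np.random.default_rng(1)
n, T = 4, 5_000
A = rng.uniform(-1.0, 1.0, size=(T, n))   # time-varying targets a_{i,t}
P = np.full((n, n), 1.0 / n)              # doubly stochastic mixing matrix

z, x = np.zeros(n), np.zeros(n)
loss = 0.0
for t in range(1, T + 1):
    a = A[t - 1]
    loss += 0.5 * ((x - a) ** 2).sum()    # incurred network loss sum_i f_{i,t}(x_i)
    z = P @ z + (x - a)                   # mix duals, add the current subgradients
    x = np.clip(-z / np.sqrt(t), -1.0, 1.0)   # projected dual-averaging step

best = np.clip(A.mean(), -1.0, 1.0)       # best fixed decision in hindsight
regret = loss - 0.5 * ((best - A) ** 2).sum()
print(f"average regret after T={T}: {regret / T:.4f}")
```

Rerunning the sketch with larger T shows the average regret shrinking at roughly the O(1/√T) rate quoted above.
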