2010
DOI: 10.1137/08073038x

A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems

Abstract: We present an algorithm that generalizes the randomized incremental subgradient method with fixed stepsize due to Nedić and Bertsekas [SIAM J. Optim., 12 (2001), pp. 109-138]. Our novel algorithm is particularly suitable for distributed implementation and execution, and possible applications include distributed optimization, e.g., parameter estimation in networks of tiny wireless sensors. The stochastic component in the algorithm is described by a Markov chain, which can be constructed in a distributed fashion…
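To make the method concrete, the sketch below applies a fixed-stepsize subgradient step to one component function at a time, with the active component selected by a Markov chain on the node set, as the abstract describes. This is a minimal illustration, not the authors' implementation: the quadratic components f_i(x) = ½‖x − a_i‖², the ring-graph transition matrix P, and all parameter values are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of a Markov-chain-driven incremental subgradient method:
# minimize f(x) = sum_i f_i(x), where node i holds component f_i.
# Each f_i(x) = 0.5 * ||x - a_i||^2 is a stand-in objective, so the
# subgradient of f_i at x is simply (x - a_i).

rng = np.random.default_rng(0)
n_nodes, dim = 4, 2
a = rng.normal(size=(n_nodes, dim))        # per-node data (assumed)

# Row-stochastic transition matrix of a Markov chain on a ring graph;
# any irreducible chain matched to the communication graph would do.
P = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

x = np.zeros(dim)        # shared iterate, carried from node to node
node = 0                 # the "token" starts at node 0
step = 0.05              # fixed stepsize, as in the analyzed method

for k in range(2000):
    g = x - a[node]                        # subgradient of f_node at x
    x = x - step * g                       # incremental subgradient step
    node = rng.choice(n_nodes, p=P[node])  # token walks to a neighbor

print("approximate minimizer:", x)         # should be near a.mean(axis=0)
```

With a fixed stepsize the iterates settle into a neighborhood of the minimizer (here, the mean of the a_i) rather than converging exactly, consistent with the fixed-stepsize analysis the abstract refers to.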

Cited by 297 publications (249 citation statements); References: 8 publications.
“…Note that the equilibria of Ψ_α-dis-opt are precisely the set of saddle points of F in (5). Let x* = 1_n ⊗ x* be a solution of (4).…”
Section: Lemma 5.3 (mentioning); confidence: 99%
“…• Remark 5.7 (Discrete-time counterpart of (6) and (11)): It is worth noticing that the discretization of (6) for undirected graphs (performed in [12] for the case of continuously differentiable, strictly convex functions) and of (11) for weight-balanced digraphs gives rise to discrete-time optimization algorithms different from the ones considered in [1], [2], [3], [4], [5], [6].…”
Section: Lemma 5.3 (mentioning); confidence: 99%
“…For this reason, we consider two classes of decentralized optimization methods which can naturally be implemented and analyzed in an asynchronous framework. The two concrete algorithms we consider are DDA [6] and MIGD [10].…”
Section: Framework and Algorithms (mentioning); confidence: 99%
“…Nedić et al. [8] and Bertsekas [9] analyze the situation where every component f_i(·) has to be visited once in every cycle, essentially assuming that the nodes are connected as a complete graph. Johansson et al. [10] generalize this approach to any connected graph in the Markov incremental gradient descent (MIGD) algorithm by having the task take a random walk on the graph. In contrast to consensus methods, all J tasks can be solved concurrently using MIGD by having each task take a random walk.…”
Section: Introduction (mentioning); confidence: 99%
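The contrast this citation draws, cyclic visiting of every component versus a random walk over the communication graph, can be made concrete with a small scheduling sketch. The line-graph topology, walk length, and helper names below are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Adjacency of a 4-node line graph 0-1-2-3 (assumed topology).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def cyclic_schedule(n_nodes, steps):
    """Cyclic order: every component is visited once per cycle.
    Realizing this order as a physical hand-off requires an edge
    between consecutive nodes, which in general means a complete graph."""
    return [k % n_nodes for k in range(steps)]

def random_walk_schedule(neighbors, start, steps):
    """MIGD-style order: the task hops to a uniformly random neighbor,
    so only edges of the actual communication graph are ever used."""
    order, node = [], start
    for _ in range(steps):
        order.append(node)
        node = rng.choice(neighbors[node])
    return order

print(cyclic_schedule(4, 8))                   # [0, 1, 2, 3, 0, 1, 2, 3]
print(random_walk_schedule(neighbors, 0, 8))   # e.g. [0, 1, 2, 1, 0, 1, 2, 3]
```

The random-walk schedule revisits some components more often than others in the short run, but an irreducible chain visits every node with a well-defined long-run frequency, which is what lets the analysis dispense with the complete-graph assumption.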