2009
DOI: 10.1137/080726380

Incremental Stochastic Subgradient Algorithms for Convex Optimization

Citation types: 1 supporting, 167 mentioning, 0 contrasting

Years published: 2010-2020

Cited by 212 publications (168 citation statements)
References 21 publications

“…[44], and Ram et al. [46,47]. Incremental subgradient methods have convergence properties that are similar in many ways to their gradient counterparts, the most important similarity being the necessity of a diminishing stepsize α_k for convergence.…”
Section: Introduction
confidence: 99%
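
For reference, the diminishing-stepsize condition this excerpt mentions applies to the standard incremental subgradient iteration; the following is a minimal statement in common notation (the symbols X, f_i, P_X, and the stepsize conditions are standard conventions assumed here, not details taken from the excerpt):

    x_{k+1} = P_X\left( x_k - \alpha_k \, g_{i_k} \right), \qquad g_{i_k} \in \partial f_{i_k}(x_k), \qquad \sum_k \alpha_k = \infty, \quad \sum_k \alpha_k^2 < \infty,

where P_X denotes Euclidean projection onto the constraint set X and the component index i_k is chosen cyclically or at random at each step.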
“…Different from [1]-[5], a dual averaging subgradient algorithm was developed and analyzed for randomized graphs in [6], under the assumption that all agents remain in the same closed convex set, and it was shown that the number of iterations required by the algorithm scales inversely with the spectral gap of the network. Moreover, distributed optimization problems with asynchronous step-sizes or inequality-equality constraints, or using other algorithms, were studied in [7]-[12], and corresponding conditions were given to ensure that the system converges to the optimal point or to a neighborhood of it. However, as in [1]-[5], it was assumed in [6]-[12] that the state sets of the agents are identical, or the objective function was shown to converge only to a neighborhood of the optimal set.…”
Section: Introduction
confidence: 99%
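
As a rough sketch of the kind of dual averaging update this excerpt describes (following the general form of distributed dual averaging; the weight matrix P, proximal function ψ, and the notation below are assumptions, not details from the excerpt):

    z_i^{k+1} = \sum_{j=1}^{m} P_{ij} \, z_j^{k} + g_i^{k}, \qquad x_i^{k+1} = \operatorname*{arg\,min}_{x \in X} \left\{ \langle z_i^{k+1}, x \rangle + \tfrac{1}{\alpha_k} \psi(x) \right\},

where g_i^k ∈ ∂f_i(x_i^k), P is a doubly stochastic matrix matched to the communication graph, and ψ is a strongly convex proximal function; the number of iterations needed for a target accuracy grows as the spectral gap of P shrinks, which is the scaling the excerpt refers to.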
“…Distributed optimization problems for multi-agent systems appear in many kinds of distributed processing tasks, such as distributed estimation, distributed motion planning, distributed resource allocation, and distributed congestion control [1][2][3][4][5][6][7][8][9][10][11][12]. The main focus is to solve a distributed optimization problem in which the global objective function is a sum of local objective functions, each of which is known only to one agent.…”
Section: Introduction
confidence: 99%
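
The problem this excerpt describes can be stated compactly (with m agents; the symbols below are standard notation, not taken from the excerpt):

    \min_{x \in X} \; f(x) = \sum_{i=1}^{m} f_i(x),

where each f_i is convex and known only to agent i, and X is a common constraint set.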
“…6] in the context of real wireless sensor networks. There are several possible implementations, mostly based on incremental gradient methods [12], which can be deterministic [13] or randomized [14], [15]. Important extensions include the use of projections to take possibly different local constraints into account [16], and the analysis of convergence rates and error bounds [17], [18].…”
Section: A. Previous Work
confidence: 99%
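
To make the deterministic-versus-randomized distinction in this excerpt concrete, here is a minimal sketch, not taken from any of the cited papers, of a projected incremental subgradient step with either a cyclic or a uniformly random component order; the toy objective f(x) = sum_i |x - t_i| and the interval constraint set are illustrative assumptions:

    import random

    def project(x, lo=-10.0, hi=10.0):
        # Euclidean projection onto the (assumed) interval constraint set X = [lo, hi].
        return max(lo, min(hi, x))

    def subgrad(i, x, targets):
        # A subgradient of f_i(x) = |x - targets[i]|; any value in [-1, 1] is valid at the kink.
        return 1.0 if x > targets[i] else (-1.0 if x < targets[i] else 0.0)

    def incremental_subgradient(targets, n_iters=2000, randomized=True, seed=0):
        rng = random.Random(seed)
        m = len(targets)
        x = 0.0
        for k in range(1, n_iters + 1):
            # Deterministic variant sweeps components cyclically; randomized picks uniformly.
            i = rng.randrange(m) if randomized else (k - 1) % m
            alpha = 1.0 / k  # diminishing stepsize: sum alpha_k diverges, sum alpha_k^2 converges
            x = project(x - alpha * subgrad(i, x, targets))
        return x

    # f(x) = sum_i |x - t_i| is minimized at the median of the targets (here 2.0),
    # and both orderings approach it under the diminishing stepsize.
    print(incremental_subgradient([1.0, 2.0, 5.0], randomized=True))
    print(incremental_subgradient([1.0, 2.0, 5.0], randomized=False))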