2008 47th IEEE Conference on Decision and Control
DOI: 10.1109/cdc.2008.4738860

Distributed subgradient methods and quantization effects

Abstract: We consider a convex unconstrained optimization problem that arises in a network of agents whose goal is to cooperatively optimize the sum of the individual agent objective functions through local computations and communications. For this problem, we use averaging algorithms to develop distributed subgradient methods that can operate over a time-varying topology. Our focus is on the convergence rate of these methods and the degradation in performance when only quantized information is available. Based …
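As a rough illustration of the class of methods the abstract describes, the sketch below implements a consensus-based subgradient iteration in which agents exchange quantized iterates over a (possibly time-varying) mixing matrix. This is a minimal Python sketch under stated assumptions: the function names, the uniform quantizer, and the exact placement of quantization are illustrative and are not taken from the paper.

```python
import numpy as np

def quantize(x, delta):
    """Uniform quantizer: round each coordinate to the nearest multiple
    of delta (one simple choice; the paper's quantizer may differ)."""
    return delta * np.round(np.asarray(x) / delta)

def distributed_subgradient(subgrads, x0, mixing, alpha, delta, iters):
    """Consensus-based subgradient method with quantized communication.

    subgrads : list of callables; subgrads[i](x) returns a subgradient
               of agent i's objective f_i at x.
    x0       : (n, d) array, one initial iterate per agent.
    mixing   : callable k -> (n, n) doubly stochastic matrix A(k)
               encoding the (time-varying) communication topology.
    alpha    : constant step size.
    delta    : quantization resolution (delta -> 0 recovers the
               unquantized method).
    """
    x = np.array(x0, dtype=float)
    for k in range(iters):
        A = mixing(k)
        # Agents broadcast quantized iterates and average their neighbors' values.
        v = A @ quantize(x, delta)
        # Each agent then takes a local subgradient step from the averaged point.
        for i in range(x.shape[0]):
            x[i] = v[i] - alpha * subgrads[i](v[i])
    return x
```

For example, with f_i(x) = ||x - c_i||_1 and subgrads[i] = lambda x, c=c[i]: np.sign(x - c), the agents' iterates approach a minimizer of the sum, up to an error floor that grows with both alpha and delta.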

Cited by 178 publications (138 citation statements)
References 21 publications
“…Distributed consensus-based algorithms have been studied in [19], [18], [21], [26], [16], [20], which rely on deterministic consensus schemes, except for [16] where a random consensus scheme is considered. These algorithms have the following limitations in common:…”
Section: Introduction (mentioning)
confidence: 99%
“…On a broader basis, the algorithm in this paper is related to the distributed (deterministic) consensus-based optimization algorithm proposed in [22], [23] and further studied in [16], [18], [21], [26], [28]. That algorithm requires the agents to update simultaneously and to coordinate their stepsize choices, in contrast with the algorithm discussed in this paper.…”
Section: Introduction (mentioning)
confidence: 72%
“…More recently, the authors of [16] consider a variant of incremental gradient methods [21] over networks where each node projects its iterate to a grid before sending the iterate to the next node. Similar quantization ideas are considered in [17]-[19] in the context of consensus-type subgradient methods [22]. The work in [20] studies the convergence of standard interference-function methods for power control in cellular wireless systems, where base stations send binary signals to the users optimizing the transmit radio power.…”
Section: A Related Literature (mentioning)
confidence: 99%
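The grid-projection idea attributed to [16] above admits an equally short sketch: in an incremental pass, each node applies its own subgradient step and then snaps the iterate to a grid before handing it to the next node in the cycle. The names and the fixed ring order are illustrative assumptions, not the construction of [16].

```python
import numpy as np

def grid_project(x, delta):
    # Project coordinate-wise onto the grid {delta * z : z integer}.
    return delta * np.round(np.asarray(x) / delta)

def incremental_cycle(subgrads, x, alpha, delta):
    """One cycle of an incremental subgradient method in which the
    iterate is quantized to a grid before each transmission."""
    for g in subgrads:                             # visit nodes in a fixed order
        x = grid_project(x - alpha * g(x), delta)  # local step, then quantize
    return x
```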
“…, we obtain the bound in Equation (17). The optimal step size γ = 2/(Lα²BN^{3/2}) comes from maximizing the denominator in Equation (17).…”
Section: A Constant Step-size (mentioning)
confidence: 99%