2014
DOI: 10.1109/tsp.2014.2331615

Adaptive Penalty-Based Distributed Stochastic Convex Optimization

Abstract: In this work, we study the task of distributed optimization over a network of learners in which each learner possesses a convex cost function, a set of affine equality constraints, and a set of convex inequality constraints. We propose a fully-distributed adaptive diffusion algorithm based on penalty methods that allows the network to cooperatively optimize the global cost function, which is defined as the sum of the individual costs over the network, subject to all constraints. We show that when small constan…
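The abstract describes a diffusion-style scheme in which each agent takes a local (stochastic) gradient step on its own cost plus a penalty for constraint violation, then combines with its neighbors. As a minimal sketch (not the paper's exact algorithm), the adapt-then-combine pattern with a quadratic penalty for a shared non-negativity constraint can be written as follows; the per-agent quadratic costs, the uniform combination matrix, and the constraint `w >= 0` are all illustrative assumptions:

```python
import numpy as np

# Hedged sketch of penalty-based adapt-then-combine (ATC) diffusion.
# Each agent k holds a hypothetical local cost J_k(w) = 0.5*||w - t_k||^2;
# the network minimizes sum_k J_k(w) subject to w >= 0 (element-wise),
# with the inequality enforced via a smooth quadratic penalty.

rng = np.random.default_rng(0)
N, d = 5, 3                          # number of agents, problem dimension
targets = rng.normal(size=(N, d))    # hypothetical per-agent data t_k
A = np.full((N, N), 1.0 / N)         # doubly-stochastic combination matrix

mu = 0.05                # small constant step size, as in the abstract
rho = 10.0               # penalty parameter for the inequality constraint
W = np.zeros((N, d))     # one iterate per agent (rows)

def grad(k, w):
    """Gradient of agent k's cost plus the penalty 0.5*rho*||min(w,0)||^2."""
    g_cost = w - targets[k]
    g_pen = rho * np.minimum(w, 0.0)   # nonzero only where w violates w >= 0
    return g_cost + g_pen

for _ in range(2000):
    # Adapt: each agent descends its own penalized cost.
    psi = np.array([W[k] - mu * grad(k, W[k]) for k in range(N)])
    # Combine: each agent averages its neighbors' intermediate iterates.
    W = A @ psi

w_star = W.mean(axis=0)  # agents reach consensus on an approximate solution
```

With a fixed penalty parameter the limit point only approximately satisfies the constraint (negative components are shrunk by a factor of `1 + rho` rather than clipped to zero); driving `rho` up, or adapting it as in the paper's title, tightens the approximation.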


Cited by 95 publications (82 citation statements)
References 43 publications
“…Since p_{j,t}(t) is a feasible point satisfying C p_{j,t}(t) = 1_m and ∑_{j=1}^{n} y_{j,i} = 1, (42) is equivalent to C p_{i,t+1}(t) = 1_m, which indicates that the new iterate p_{i,t+1}(t) satisfies the equality constraints of the model (17).…”
Section: Lemma 3 (Conditions On Gradient Projection Matrix For Iterat…) (mentioning)
Confidence: 99%
“…At this point, the new iterate p_{j,t+1}(t) also satisfies the inequality constraints of the optimization model (17).…”
Section: Lemma 3 (Conditions On Gradient Projection Matrix For Iterat…) (mentioning)
Confidence: 99%
“…Further distributed algorithms for set-constrained optimization were investigated in Bianchi and Jakubowicz [34] and Lou et al. [35]. To work out distributed optimization problems with asynchronous step-sizes or inequality-equality constraints, distributed Lagrangian and penalty primal-dual subgradient algorithms were developed in Zhu and Martinez [36] and Towfic and Sayed [37]. Both of them were designed for function-constrained problems.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Apart from [4], this algorithm has effectively been used for distributed Pareto optimization in various scenarios and applications such as in the works of [19]–[23].…”
Section: Introduction (mentioning)
Confidence: 99%