2015
DOI: 10.48550/arxiv.1511.08905
Preprint

Stochastic Proximal Gradient Consensus Over Random Networks

Cited by 7 publications (15 citation statements) · References 36 publications
“…where the authors also develop a distributed algorithm based on linearized ADMM for solving (3) over both random and static networks, attaining rate results similar to ours. For the static network setting, their algorithm achieves an O(1/t) rate with deterministic gradients and an O(1/√t) rate with stochastic gradients; however, in contrast to our results, these rates are established under the assumption of a bounded domain for every ξ_i (in both the deterministic and the stochastic settings), and explicit bounds for suboptimality and infeasibility are not provided separately. Moreover, when the gradient is noisy, their algorithm does not admit a compact characterization using only primal local decisions (see Theorem 4.2 and Algorithm 1 in [26]): even for a static network, noisy gradients require Algorithm 1, which updates edge variables and explicitly computes the dual variables, whereas our algorithm SDPGA with stochastic gradients has a compact form that updates only primal node variables, never explicitly computes the dual iterates, and still achieves the O(1/√t) rate without assuming a compact domain for any ξ_i.…”
Section: B Related Work (contrasting)
confidence: 78%
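The O(1/√t) stochastic rate discussed in this statement is typically obtained by running a stochastic proximal gradient method with a diminishing step size and averaging the iterates. Below is a minimal single-machine sketch of that pattern; it is not SDPGA or the algorithm of [26], and the problem instance, step-size constant `c`, and noise level are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_prox_grad(A, b, lam, iters=2000, c=0.02, noise=0.01, seed=0):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 using noisy gradients,
    step size c/sqrt(t), and an ergodic (running-average) output."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    x_avg = np.zeros_like(x)
    for t in range(1, iters + 1):
        step = c / np.sqrt(t)
        # gradient of the smooth part, corrupted by additive noise
        grad = A.T @ (A @ x - b) + noise * rng.standard_normal(x.shape)
        x = soft_threshold(x - step * grad, step * lam)
        x_avg += (x - x_avg) / t       # running average of the iterates
    return x_avg

# toy instance: recover a sparse vector from noiseless linear measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true
x_hat = stochastic_prox_grad(A, b, lam=0.1)
```

The averaged iterate is what the ergodic O(1/√t) bound refers to; the last iterate of a stochastic method need not enjoy the same guarantee.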
“…We will illustrate their practical performance in Section IV. After we started writing this paper, we became aware of other recent work [2], [24]–[26] for solving (3) over a connected graph G. These methods are very closely related to our proximal gradient ADMM (PG-ADMM) and are likewise based on the linearized ADMM method. Suppose Φ_i(x) = ξ_i(x) + f_i(A_i x).…”
Section: B Related Work (mentioning)
confidence: 99%
“…Until now, we have rewritten the primal-dual algorithm (9) as a fixed-point iteration (27) with the operator T defined in (26). We have also established that Fix T coincides with the set of solutions to the KKT system (23). It remains to prove that the sequence {Z^k} generated by the fixed-point iteration (27) converges to Fix T.…”
Section: B Convergence Of Algorithm (mentioning)
confidence: 92%
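The convergence argument sketched in this statement follows the standard fixed-point route: show the operator T is (averaged) nonexpansive, identify Fix T with the KKT solutions, and conclude via Krasnosel'skii–Mann iteration. A small self-contained illustration, with a generic nonexpansive T assumed for demonstration (a plane rotation, whose only fixed point is the origin): the plain iteration z^{k+1} = T(z^k) need not converge, while the averaged iteration does:

```python
import numpy as np

def km_iterate(T, z0, alpha=0.5, iters=2000):
    """Krasnosel'skii-Mann iteration: z_{k+1} = (1-alpha)*z_k + alpha*T(z_k)."""
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z = (1.0 - alpha) * z + alpha * T(z)
    return z

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda z: R @ z               # nonexpansive (an isometry); Fix T = {0}

z0 = np.array([1.0, 0.0])
z_plain = z0.copy()
for _ in range(2000):
    z_plain = T(z_plain)          # plain iteration: circles forever on the unit sphere
z_km = km_iterate(T, z0)          # averaged iteration: converges to the fixed point
```

The averaging step is what turns a merely nonexpansive map into a convergent iteration; this is the same mechanism behind establishing convergence of {Z^k} to Fix T.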
“…Corollary 1. Under Assumption 1, Algorithm 1 produces iterates Z^k that converge to a solution Z* = [X*; Y*] of the KKT system (23), which is also a saddle point of Problem (8). Therefore, X* is a solution to Problem (7).…”
Section: B Convergence Of Algorithm (mentioning)
confidence: 99%