Randomized Algorithms for Distributed Nonlinear Optimization Under Sparsity Constraints

2016 · DOI: 10.1109/tsp.2015.2500887

Abstract: Distributed optimization in multi-agent systems under sparsity constraints has recently received a lot of attention. In this paper, we consider the in-network minimization of a continuously differentiable nonlinear function which is a combination of local agent objective functions subject to sparsity constraints on the variables. A crucial issue of in-network optimization is the handling of the communications, which may be expensive. This calls for efficient algorithms, that are able to reduce the number of re…
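The abstract describes the problem class only in words. A plausible formalization, assuming the usual sum-of-local-objectives form and an ℓ0 sparsity constraint (the symbols N, f_i, and τ below are illustrative, not taken from the paper):

```latex
% Hypothetical formalization of the problem class in the abstract:
% N agents, each holding a continuously differentiable local objective f_i,
% jointly minimize the sum subject to an l0 sparsity budget tau.
\min_{x \in \mathbb{R}^n} \; \sum_{i=1}^{N} f_i(x)
\quad \text{s.t.} \quad \|x\|_0 \le \tau
```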

Cited by 13 publications (10 citation statements) · References 43 publications · Citing publications span 2017–2023.
“…The proof can be obtained with techniques similar to those devised in [51], and we omit it for brevity. This result provides a necessary condition for optimality and shows that, since the function in (12) is not convex, τ-stationary points are in general only local minima.…”
Section: Proposed Iterative Methods and Main Results
confidence: 99%
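The quoted τ-stationarity condition is stated abstractly; in IHT-style analyses of sparsity-constrained problems, stationarity is typically a fixed-point condition of a gradient step followed by hard thresholding. A minimal sketch of such a check, assuming τ is the sparsity level and a given step size; the function names are illustrative, not taken from [51] or the citing paper:

```python
import numpy as np

def hard_threshold(x, tau):
    """Keep the tau largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-tau:]   # indices of the tau largest |x_i|
    out[keep] = x[keep]
    return out

def is_tau_stationary(x, grad_f, tau, step, tol=1e-8):
    """Test the fixed-point condition x = H_tau(x - step * grad_f(x)).

    Necessary for optimality of min f(x) s.t. ||x||_0 <= tau, but not
    sufficient, since the constraint set is nonconvex.
    """
    return np.linalg.norm(x - hard_threshold(x - step * grad_f(x), tau)) <= tol
```

For example, with f(x) = 0.5·||x − b||² and step = 1, the gradient step maps any x to b, so the test passes exactly when x equals the hard-thresholded b; this matches the quoted remark that such conditions are necessary but, absent convexity, not sufficient for global optimality.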
“…Despite the high communication cost, the per-iteration computational complexity of (39) is also high, i.e., O(d log d) [41]. We notice that [7,8] have considered distributed sparse recovery algorithms with a focus on communication efficiency. However, their algorithms are based on the iterative hard thresholding (IHT) formulation [52], which requires a priori knowledge of the sparsity level of θ_true.…”
Section: Example II: Communication Efficient DeFW for Lasso
confidence: 99%
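The quoted limitation, that IHT-based methods need the sparsity level of θ_true up front, is visible in the algorithm's structure: the thresholding step cannot be applied without a target support size. A minimal centralized sketch for the least-squares case, with all names and defaults illustrative rather than taken from [7,8,52]:

```python
import numpy as np

def iht(A, y, sparsity, n_iters=200):
    """Iterative hard thresholding for min 0.5*||y - A x||^2, ||x||_0 <= sparsity.

    Note that `sparsity` must be supplied: the thresholding step has no
    meaning without a target support size, which is the a priori knowledge
    the quoted passage refers to. Illustrative sketch only.
    """
    m, d = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic loss
    x = np.zeros(d)
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)     # gradient step
        keep = np.argsort(np.abs(x))[-sparsity:]
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]               # hard thresholding
        x = pruned
    return x
```

Frank-Wolfe-style methods such as DeFW instead work with an ℓ1-ball constraint, which is why they can sidestep this requirement.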
“…The centralized FW algorithm for both losses will also be compared (cf. (7)), as well as the decentralized algorithm in [45] (labeled 'Qing et al.') and the DPG algorithm [15] with step size set to α_t = 0.1N/(√t + 1), applied to the square loss. Our first example considers a noiseless synthetic dataset with problem dimensions m_1 = 100, m_2 = 250, K = 5.…”
Section: Decentralized Matrix Completion
confidence: 99%
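For concreteness, the quoted DPG step size is a diminishing schedule scaled by the number of agents N. A sketch of one DPG iteration using it, assuming a doubly stochastic mixing matrix W and a projection oracle for the constraint set; this illustrates the generic method and is not code from [15]:

```python
import numpy as np

def dpg_step(X, W, grads, t, project):
    """One decentralized projected gradient (DPG) iteration.

    X:       (N, d) array; row i is agent i's current iterate.
    W:       (N, N) doubly stochastic mixing matrix of the network.
    grads:   (N, d) array of local gradients at the rows of X.
    t:       iteration counter.
    project: Euclidean projection of a length-d vector onto the constraint set.

    Uses the quoted diminishing step size alpha_t = 0.1*N/(sqrt(t) + 1).
    Generic illustration only.
    """
    N = X.shape[0]
    alpha = 0.1 * N / (np.sqrt(t) + 1.0)
    Z = W @ X - alpha * grads              # consensus step, then gradient step
    return np.apply_along_axis(project, 1, Z)
```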