2019
DOI: 10.1016/j.sigpro.2018.10.010

Estimate exchange over network is good for distributed hard thresholding pursuit

Abstract: We investigate an existing distributed algorithm for learning sparse signals or data over networks. The algorithm is iterative and exchanges intermediate estimates of a sparse signal over a network. This learning strategy, based on the exchange of intermediate estimates over the network, requires limited communication overhead for information transmission. Our objective in this article is to show that the strategy is good for learning in spite of limited communication. In pursuit of this objective, we first provide a…
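
As a rough illustration of the strategy the abstract describes (iterative local hard-thresholding-style updates combined with exchange of intermediate estimates between neighboring nodes), a minimal sketch is given below. The paper's exact DHTP update and combination rules are not reproduced here; the synthetic setup, the neighbor-averaging exchange step, and all function names are illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and set the rest to zero."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

def distributed_htp_sketch(A, y, neighbors, s, iters=50):
    """Toy distributed hard-thresholding-pursuit-style loop (illustrative only).

    A[l], y[l]   : local sensing matrix and measurement vector at node l
    neighbors[l] : iterable of node indices whose estimates node l receives
    s            : assumed sparsity level
    """
    L, n = len(A), A[0].shape[1]
    x = [np.zeros(n) for _ in range(L)]            # local intermediate estimates
    for _ in range(iters):
        new_x = []
        for l in range(L):
            # gradient step followed by hard thresholding (support detection)
            g = x[l] + A[l].T @ (y[l] - A[l] @ x[l])
            T = np.flatnonzero(hard_threshold(g, s))
            # least-squares refit on the detected support
            z = np.zeros(n)
            z[T], *_ = np.linalg.lstsq(A[l][:, T], y[l], rcond=None)
            new_x.append(z)
        # estimate exchange: each node averages its own and its neighbors'
        # estimates, then re-thresholds to stay s-sparse
        # (the averaging rule is an assumption here, not the paper's rule)
        x = [hard_threshold(np.mean([new_x[j] for j in [l, *neighbors[l]]], axis=0), s)
             for l in range(L)]
    return x
```

The point of the sketch is only to show where the exchange of intermediate estimates enters the iteration relative to the local thresholding and refitting steps.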

Cited by 10 publications (6 citation statements)
References 46 publications
“…The pruned BPDN (pNBPDN) is closer to distributed greedy algorithms that work with the same system setup. Examples of such greedy algorithms are network greedy pursuit (NGP) [22] and distributed hard thresholding pursuit (DHTP) [31]. We mention that NGP and DHTP have the RIP conditions δ_{3s}(A_l) < 0.362 and δ_{3s}(A_l) < 0.333, respectively.…”
Section: Discussion
confidence: 99%
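
For readers unfamiliar with the notation in this excerpt, δ_{3s}(A_l) denotes the standard restricted isometry constant of order 3s of the local sensing matrix A_l; this definition is added for context and is not part of the cited text:

```latex
% Restricted isometry constant of order 3s: the smallest \delta \ge 0 with
\[
  (1-\delta_{3s}(A_l))\,\|x\|_2^2 \;\le\; \|A_l x\|_2^2 \;\le\; (1+\delta_{3s}(A_l))\,\|x\|_2^2
  \quad \text{for all } x \text{ with } \|x\|_0 \le 3s .
\]
```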
“…Our algorithms are locally optimum at each node of the network. This endeavor is an extension of past work in designing distributed greedy algorithms [31], [22] to the regime of convex optimization.…”
Section: B. Literature Review
confidence: 99%
“…Essentially, the model (1) is to find the k-term approximation of the target signal that best fits the acquired measurements. Similar models also arise in numerous areas such as statistical learning [5]-[7], wireless communication [8], [9], low-rank matrix recovery [10]-[14], and linear inverse and optimization problems [15]-[18]. While the SCO and related problems can be solved via convex optimization, nonconvex optimization and orthogonal matching pursuit (OMP) (see, e.g., [1]-[4]), thresholding methods with low computational complexity are particularly suited for solving the SCO model.…”
Section: Introduction
confidence: 95%
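
The "best k-term approximation" in this excerpt is the projection onto the set of k-sparse vectors, i.e., keeping the k largest-magnitude entries and zeroing the rest. A minimal numerical illustration (the array values are made up for the example):

```python
import numpy as np

def best_k_term(x, k):
    """Project x onto the set of k-sparse vectors: keep the k largest-magnitude entries."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    z[idx] = x[idx]
    return z

x = np.array([0.1, -2.0, 0.3, 1.5, -0.05])
print(best_k_term(x, 2))   # -> [ 0.  -2.   0.   1.5  0. ]
```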
“…Proof. (i) Under (17), Theorem 3.4 claims that the sequence {x^(p) : p ≥ 1}, generated by NT and NT_q with λ_p ≡ 1 and concave g_α(·), satisfies (18). Note that ‖A^T ν′‖_2 ≤ σ_max(A) ‖ν′‖_2, where σ_max(A) is the largest singular value of A.…”
Section: Guaranteed Performance and Stability
confidence: 99%
“…The hard thresholding methods have been widely studied in the area of compressed sensing and sparse approximation [5,6,7,29,30,4]. The latest developments and applications of these methods can be found in [8,9,34,46,52,54,48]. Although sparse optimization problems like (1) arising from compressed sensing are usually NP-hard [42], this has not prohibited the rapid development of various computational methods for such problems.…”
Section: Introduction
confidence: 99%