2020
DOI: 10.1109/tsp.2020.2978615
Learning Proximal Operator Methods for Nonconvex Sparse Recovery with Theoretical Guarantee

Cited by 21 publications (13 citation statements)
References 30 publications
“…(iii) Suppose that (x*, y*) is an accumulation point of {(x_k, y_k)}. Then there exists a vector z* such that (x*, y*, z*) is an accumulation point of {(x_k, y_k, z_k)} and satisfies (30). This further implies that x* satisfies (5) by taking x̄ = x*.…”
Section: Global Convergence (mentioning)
confidence: 97%
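
For context, a standard definition underlying the statement above (a general fact, not specific to the cited paper's equations (30) and (5)): x* is an accumulation point of a sequence {x^k} exactly when some subsequence converges to it.

```latex
% Accumulation (limit) point: some subsequence of {x^k} converges to x^*.
\exists\, k_1 < k_2 < \cdots \quad \text{such that} \quad
\lim_{j \to \infty} x^{k_j} = x^* .
```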
“…In this section, we focus on solving (3) with X = R^n_+. First, we derive the closed-form solution of (L_1/L_2)_+ and develop a practical solver for one of its global solutions, which provides an extension of Moreau's proximal theory to the nonconvex setting [29,30].…”
Section: Computational Approach (mentioning)
confidence: 99%
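
For context, the proximal operator referenced above takes the generic set-constrained form below (possibly set-valued when f is nonconvex); instantiating f(x) = ||x||_1 / ||x||_2 and X = R^n_+ matches the quoted passage, while the penalty weight λ is generic.

```latex
% Proximal operator of lambda*f over a constraint set X; when f is
% nonconvex the arg min need not be unique, hence the inclusion.
\operatorname{prox}_{\lambda f}^{X}(y) \;\in\;
  \operatorname*{arg\,min}_{x \in X}
  \left\{ \lambda f(x) + \tfrac{1}{2}\,\|x - y\|_2^2 \right\},
\qquad f(x) = \frac{\|x\|_1}{\|x\|_2},\quad X = \mathbb{R}^n_{+}.
```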
“…The high interpretability of SNMF-Net makes it easy to understand its unmixing mechanism and to incorporate prior knowledge about the endmembers and abundances into the network for enhanced learning. Driven by recent theoretical research on the relationship between the network parameters and dictionaries in the learned iterative shrinkage-thresholding algorithm (LISTA) [40]-[43], we further precompute and initialize the values of the weight matrices using mutual-coherence minimization from sparse representation theory. This simplifies network learning and improves the network's robustness to noise and the effectiveness of unmixing.…”
Section: Introduction (mentioning)
confidence: 99%
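
For orientation, here is a minimal NumPy sketch of the classical LISTA recursion of Gregor and LeCun that the works cited as [40]-[43] analyze. The ISTA-style initialization W_e = Aᵀ/L, S = I − AᵀA/L shown here is one standard choice (in LISTA proper these matrices and the threshold are learned from data); none of the names are taken from SNMF-Net itself.

```python
import numpy as np

def soft_threshold(v, theta):
    # Proximal operator of theta * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lista(y, A, theta, n_layers=16):
    # Classical LISTA recursion: x_{k+1} = soft(We @ y + S @ x_k, theta).
    # We and S are set to their ISTA-like initialization here; in trained
    # LISTA they (and theta) are free parameters learned per layer.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    We = A.T / L
    S = np.eye(A.shape[1]) - (A.T @ A) / L
    x = soft_threshold(We @ y, theta)
    for _ in range(n_layers - 1):
        x = soft_threshold(We @ y + S @ x, theta)
    return x
```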
“…Furthermore, non-convex algorithms for 2D sparse recovery have also been presented, such as the iterative proximal-projection approach [17], the L+S model [18], the robust projected generalised gradient method [19], and so on [20]. Recently, learning-based sparse recovery algorithms for 2D images have also been presented, such as the 2D learning proximal gradient algorithm [21] and the learning proximal operator method [22], just to name a few. However, work on extending greedy algorithms to the 2D case, an important class of 2D sparse recovery algorithms, is rare.…”
Section: Introduction (mentioning)
confidence: 99%
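
To make the 2D setting concrete, here is a minimal sketch of a plain (non-learned) proximal gradient / ISTA iteration for the common bilinear measurement model Y ≈ A X Bᵀ with an ℓ1 penalty. The model form and all parameter names are assumptions for illustration, not the method of [21] or [22].

```python
import numpy as np

def soft_threshold(V, theta):
    # Proximal operator of theta * ||.||_1, applied elementwise.
    return np.sign(V) * np.maximum(np.abs(V) - theta, 0.0)

def ista_2d(Y, A, B, lam, n_iters=200):
    # Proximal gradient for: min_X 0.5*||A X B^T - Y||_F^2 + lam*||X||_1.
    # Step size 1/L, with L an upper bound on the gradient's Lipschitz constant.
    L = (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)) ** 2
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(n_iters):
        grad = A.T @ (A @ X @ B.T - Y) @ B   # gradient of the data-fit term
        X = soft_threshold(X - grad / L, lam / L)
    return X
```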