2020
DOI: 10.1109/tsp.2020.3025731
SPOQ $\ell _p$-Over-$\ell _q$ Regularization for Sparse Signal Recovery Applied to Mass Spectrometry

Abstract: Underdetermined or ill-posed inverse problems require additional information for sound solutions with tractable optimization algorithms. Sparsity yields consequent heuristics to that matter, with numerous applications in signal restoration, image recovery, or machine learning. Since the $\ell_0$ count measure is barely tractable, many statistical or learning approaches have invested in computable proxies, such as the $\ell_1$ norm. However, the latter does not exhibit the desirable property of scale invariance for sparse data…
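The abstract contrasts the intractable $\ell_0$ count with the convex $\ell_1$ norm and motivates a scale-invariant ratio penalty. Below is a minimal NumPy sketch of a smoothed $\ell_p$-over-$\ell_q$ penalty in the spirit of SPOQ; the smoothing constants alpha, beta, eta, their placement, and the default values are assumptions patterned on the general construction described in the paper, not its exact definition or defaults.

```python
import numpy as np

def spoq_like_penalty(x, p=0.75, q=2.0, alpha=1e-7, beta=1e-3, eta=1e-1):
    """Sketch of a smoothed l_p-over-l_q ratio penalty (SPOQ-style).

    psi(x) = log( (l_{p,alpha}^p(x) + beta^p)^(1/p) / l_{q,eta}(x) )
    alpha, beta, eta are small smoothing constants (placeholder values,
    not the paper's defaults); p < 1 sharpens sparsity, q >= 2 normalizes scale.
    """
    x = np.asarray(x, dtype=float)
    lp_p = np.sum((x**2 + alpha**2) ** (p / 2) - alpha**p)   # smoothed l_p^p
    lq = (eta**q + np.sum(np.abs(x) ** q)) ** (1.0 / q)       # smoothed l_q
    return np.log((lp_p + beta**p) ** (1.0 / p) / lq)
```

For the unsmoothed ratio, $\ell_p(cx)/\ell_q(cx) = \ell_p(x)/\ell_q(x)$ for any $c > 0$, which is the scale invariance the abstract alludes to; the smoothing terms trade exact invariance for differentiability at zero.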

Cited by 23 publications (18 citation statements)
References 78 publications
“…where † denotes the pseudo-inverse operation. The computational cost necessary for one iteration is considerably reduced when compared to the HQ algorithm in (6). Note that the 3MG scheme is highly related to the nonlinear conjugate gradient algorithm [27], to the momentum-based accelerated gradient schemes from [28] and to trust-region approaches [29].…”
Section: Majorize-Minimize Memory Gradient (3MG) Algorithm
Citation type: mentioning (confidence: 99%)
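The passage above describes the 3MG update, in which a quadratic majorant is minimized over a two-dimensional subspace (current gradient plus previous step) via a small pseudo-inverse. The following hypothetical NumPy sketch applies that idea to a least-squares fidelity with a hyperbolic (smoothed $\ell_1$) penalty; the fixed global majorant curvature $H^\top H + (\lambda/\delta) I$ is a simplification of the tighter per-iterate majorants used in the 3MG literature.

```python
import numpy as np

def memory_gradient_sketch(H, y, lam=0.1, delta=1e-2, n_iter=200):
    """Minimize 0.5*||H x - y||^2 + lam * sum(sqrt(x_i^2 + delta^2))
    with a simplified MM memory-gradient (subspace) iteration."""
    n = H.shape[1]
    # Global quadratic majorant curvature: phi''(t) <= 1/delta for the
    # hyperbolic penalty, so A upper-bounds the Hessian everywhere.
    A = H.T @ H + (lam / delta) * np.eye(n)
    x = np.zeros(n)
    d_prev = None                                   # previous step (memory)
    for _ in range(n_iter):
        g = H.T @ (H @ x - y) + lam * x / np.sqrt(x**2 + delta**2)  # gradient
        # Search subspace: steepest descent direction, plus previous step.
        D = -g[:, None] if d_prev is None else np.column_stack([-g, d_prev])
        # Minimize the quadratic majorant over span(D): u = -(D^T A D)^† D^T g
        u = -np.linalg.pinv(D.T @ A @ D) @ (D.T @ g)
        step = D @ u
        x = x + step
        d_prev = step
    return x
```

Because the inner problem is only two-dimensional, each iteration costs one pseudo-inverse of a 2x2 matrix plus matrix-vector products, which is the cost reduction the quoted passage refers to.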
“…A first family of techniques amounts to minimizing a penalized loss function, combining a data fidelity term and a regularization term that incorporates prior knowledge (e.g., sparsity, positivity) on x. This approach was adopted, for instance, in [3], [4] in the context of NMR relaxometry, in [5] in DOSY NMR, and in [6], [7] for MS data processing.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
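As a concrete instance of the penalized formulation described in the passage above (data fidelity plus a sparsity- and positivity-promoting regularizer), the sketch below runs projected ISTA on a least-squares term with an $\ell_1$ penalty and a nonnegativity constraint; H, y, and lam are placeholders for the forward model, measurements, and regularization weight, and this is not the specific solver used in the cited works.

```python
import numpy as np

def penalized_recovery(H, y, lam=0.1, n_iter=500):
    """min_x 0.5*||H x - y||^2 + lam*||x||_1  subject to  x >= 0,
    solved with projected ISTA (proximal gradient)."""
    L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)             # gradient of the data-fidelity term
        x = x - grad / L                     # gradient step
        x = np.maximum(x - lam / L, 0.0)     # soft-threshold fused with positivity
    return x
```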
“…To prevent this possible slowdown, we propose a modified strategy relying on local majorants rather than global ones. Such a local majorization strategy has already been successfully used for another MM algorithm in [26], leading to a significant speed-up.…”
Section: Variant With Local Majorants
Citation type: mentioning (confidence: 99%)
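To illustrate why local majorants can accelerate MM iterations, as the passage above argues, here is a hypothetical one-dimensional example with a log penalty: the local curvature $2\lambda/(1+x_k^2)$ of the tangent majorant shrinks away from the origin, permitting larger steps than the worst-case global bound $2\lambda$.

```python
import numpy as np

def mm_local_vs_global(y=4.0, lam=1.0, n_iter=30, local=True):
    """MM iterations for f(x) = 0.5*(x - y)^2 + lam*log(1 + x^2) in 1-D.

    local=True uses the tangent majorant of the concave log term
    (curvature 2*lam/(1 + x_k^2)); local=False uses the global bound
    2*lam. Both decrease f monotonically; the local one takes larger steps.
    """
    x = 0.0
    trajectory = [x]
    for _ in range(n_iter):
        grad = (x - y) + 2 * lam * x / (1 + x**2)                 # exact gradient
        curv = 1 + (2 * lam / (1 + x**2) if local else 2 * lam)   # majorant curvature
        x = x - grad / curv                                       # minimize the majorant
        trajectory.append(x)
    return np.array(trajectory)
```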
“…many applications. As such, various nonconvex minimizations have been developed due to their sharper approximation of the $\ell_0$ norm [6][7][8].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
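The last passage refers to nonconvex penalties that approximate the $\ell_0$ count more sharply than $\ell_1$. A tiny illustration with a normalized log-sum surrogate (one common choice, used here purely as an example) shows the effect on a fixed sparse vector:

```python
import numpy as np

# Compare sparsity measures on a vector with two nonzero entries.
x = np.array([0.0, 0.0, 5.0, -3.0, 0.0])
eps = 1e-2
l0 = np.count_nonzero(x)                          # true support size: 2
l1 = np.sum(np.abs(x))                            # scales with magnitudes: 8
# Normalized log-sum surrogate, ~1 per entry with |x_i| >> eps.
log_sum = np.sum(np.log(1 + np.abs(x) / eps)) / np.log(1 + 1 / eps)
print(l0, l1, round(log_sum, 2))                  # 2  8.0  ~2.58
```

The log-sum value stays close to the true support size, while the $\ell_1$ norm grows with the magnitudes of the nonzero entries, which is the "sharper approximation" the quoted sentence points to.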