2020
DOI: 10.1088/1361-6420/ab750c

ℓ1 − αℓ2 minimization methods for signal and image reconstruction with impulsive noise removal

Abstract: In this paper, we study ℓ1 − αℓ2 (0 < α ≤ 1) minimization methods for signal and image reconstruction with impulsive noise removal. The data fitting term is based on ℓ1 fidelity between the reconstruction output and the observational data, and the regularization term is based on ℓ1 − αℓ2 nonconvex minimization of the reconstruction output or its total variation. Theoretically, we show that, under the generalized restricted isometry property, the underlying signal or image can be recovered exactly. Numerical algo…
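
Based on the abstract, the model under study presumably takes the following penalized form, with ℓ1 data fidelity and an ℓ1 − αℓ2 regularizer on the signal x or on its gradient (total variation). This is a sketch reconstructed from the abstract's description; the symbols A, b, λ, and ∇ are notation assumed here, not quoted from the paper.

% Sketch of the l1 - alpha*l2 models suggested by the abstract.
% A: measurement operator, b: observed data, lambda > 0: penalty
% parameter, nabla: discrete gradient (all assumed notation).
\begin{align*}
  &\min_{x} \; \|Ax - b\|_1 + \lambda\,\bigl(\|x\|_1 - \alpha\|x\|_2\bigr),
     && 0 < \alpha \le 1 \quad \text{(signal case)},\\
  &\min_{x} \; \|Ax - b\|_1 + \lambda\,\bigl(\|\nabla x\|_1 - \alpha\|\nabla x\|_2\bigr)
     && \text{(total-variation case for images)}.
\end{align*}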

Cited by 41 publications (36 citation statements: 0 supporting, 36 mentioning, 0 contrasting)
References 56 publications
“…Note that this log factor also occurs in the bound (1.5) for the TV model (1.2) and the bound (3.11) for the enhanced TV model (1.3), but it is removed if the required RIP order increases from O(s) to O(s log^3(N)), and then both bounds can be improved to (1.6) and (3.13), respectively. Reconstruction guarantees for the model in [31] have been investigated in [30]. However, the derived error bound (see Theorem 3.8 in [30]) still fails to remove the log factor log(N^2/s), despite the fact that the subsampled measurements are required to have the RIP of order O(s^2 log(N)) with a more complicated level δ which depends on N, s, and the constant C in Lemma 2.1.…”
Section: Further Discussion (mentioning)
confidence: 99%
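
For context, the restricted isometry property (RIP) of order s invoked in this snippet is, in its standard form, the following condition on the measurement matrix A (a textbook definition; the "generalized" RIP of the paper under discussion may differ in detail):

% Standard RIP of order s with constant delta_s.
\[
  (1 - \delta_s)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta_s)\,\|x\|_2^2
  \qquad \text{for all } s\text{-sparse } x .
\]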
“…To guarantee exact recovery of a sparse solution, ℓ1 − ℓ2 only requires a relaxed variant of the null space property [79]. Furthermore, ℓ1 − αℓ2 is more robust against impulsive noise than ℓ1 in yielding sparse, accurate solutions for inverse problems [44]. Besides compressed sensing, it has been utilized in image denoising and deblurring [53], image segmentation [71], image inpainting [63], and hyperspectral demixing [21].…”
Section: Nonconvex Sparse Group Lasso (mentioning)
confidence: 99%
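
For reference, the classical null space property of order s, which ℓ1 minimization needs for exact recovery and which [79] relaxes for the ℓ1 − ℓ2 setting, reads as follows (a textbook statement; the exact relaxed variant of [79] is not quoted in the snippet):

% Classical null space property (NSP) of order s for the matrix A.
\[
  \|v_S\|_1 \;<\; \|v_{S^c}\|_1
  \qquad \text{for all } v \in \ker(A)\setminus\{0\}
  \text{ and all } S \text{ with } |S| \le s .
\]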
“…It is rooted in sparse signal recovery under tight frames and in the constrained and unconstrained ‖x‖1 − α‖x‖2 minimizations, which have recently attracted a lot of attention. The constrained ‖x‖1 − α‖x‖2 minimization [18,28,33,36,37,52] is used for the recovery of x. The unconstrained ‖x‖1 − α‖x‖2 minimization [19,33,36,37,52] is…”
Section: Contributions (mentioning)
confidence: 99%
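
The snippet above is truncated before the formulas it introduces. For orientation, the constrained and unconstrained ℓ1 − αℓ2 problems as they commonly appear in this literature are sketched below; these are standard forms, not text recovered from the citing paper.

% Common constrained and unconstrained l1 - alpha*l2 formulations
% (lambda > 0 is a penalty parameter; both are assumed standard forms).
\begin{align*}
  &\min_{x} \; \|x\|_1 - \alpha\|x\|_2
     \quad \text{s.t.} \quad Ax = b
     && \text{(constrained)},\\
  &\min_{x} \; \lambda\,\bigl(\|x\|_1 - \alpha\|x\|_2\bigr)
     + \tfrac{1}{2}\,\|Ax - b\|_2^2
     && \text{(unconstrained)}.
\end{align*}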
“…There is an effective algorithm based on the difference of convex algorithm (DCA) to solve (1.7); see [37,52]. Numerical examples in [18,28,37,52] demonstrate that the ℓ1 − αℓ2 minimization consistently outperforms the ℓ1 minimization and the ℓp minimization of [25] when the measurement matrix A is highly coherent. Motivated by the smoothing and decomposition transformations in [47], the ℓ1 − αℓ2-ASSO is written as a general nonsmooth convex optimization problem:…”
Section: Contributions (mentioning)
confidence: 99%
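
To make the DCA scheme mentioned in this snippet concrete, below is a minimal NumPy sketch for the unconstrained problem in the form given above: the outer loop linearizes the concave term −α‖x‖2 at the current iterate, and each resulting convex ℓ1 subproblem is solved with ISTA (soft thresholding). The function names, default parameters, and the ISTA inner solver are illustrative assumptions, not the exact algorithm of [37,52].

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (componentwise soft thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dca_l1_minus_al2(A, b, lam=0.1, alpha=1.0, outer_iters=20, inner_iters=200):
    # Sketch (assumed, not the authors' code) of DCA for
    #   min_x  lam * (||x||_1 - alpha * ||x||_2) + 0.5 * ||Ax - b||_2^2 .
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of A^T(Ax - b)
    step = 1.0 / L
    for _ in range(outer_iters):
        # Subgradient of alpha * ||.||_2 at x (take 0 at the origin).
        nx = np.linalg.norm(x)
        v = alpha * x / nx if nx > 0 else np.zeros(n)
        # Inner ISTA loop for the convex subproblem
        #   min_y  lam * ||y||_1 + 0.5 * ||Ay - b||_2^2 - lam * <v, y>.
        y = x.copy()
        for _ in range(inner_iters):
            grad = A.T @ (A @ y - b) - lam * v
            y = soft_threshold(y - step * grad, step * lam)
        x = y
    return x

A typical call would be x_hat = dca_l1_minus_al2(A, b, lam=0.05, alpha=1.0), where α = 1 recovers the ℓ1 − ℓ2 case the snippets discuss; each DCA outer step does not increase the objective, which is consistent with its reported effectiveness for highly coherent A in [18,28,37,52].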