2020
DOI: 10.1049/iet-ipr.2020.0194
Simple algorithm for L1-norm regularisation-based compressed sensing and image restoration

Cited by 7 publications (6 citation statements)
References 29 publications
“…where the sparsity of the vector Yx is encouraged by the $\ell_1$ norm (Qin, 2020). In addition, the common idea of network-based CS methods is to replace the operators in traditional CS methods with neural networks (Liu et al., 2021).…”
Section: CS Reconstruction Methods
confidence: 99%
“…The regularized model approach investigated both the $TV$ norm (as provided above) and the $L_1$ norm [16, 17] as penalty terms. The $L_1$ norm is defined as $\Vert x \Vert_1 = \sum_{i=1}^{n} |[x]_i|$.…”
Section: Indirect Solutions
confidence: 99%
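The $L_1$ norm definition quoted above is straightforward to evaluate numerically; the following minimal NumPy sketch (the function name `l1_norm` is ours, not from any of the cited papers) just sums the absolute values of the entries:

```python
import numpy as np

def l1_norm(x):
    """L1 norm of a vector: the sum of the absolute values of its entries."""
    return np.sum(np.abs(x))

x = np.array([3.0, -4.0, 0.0, 1.5])
print(l1_norm(x))  # 8.5
```

Because the absolute value weights small and large entries alike, this penalty tends to drive many coefficients exactly to zero, which is why it is used as a sparsity-promoting regularizer in compressed sensing.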
“…The first term in Equation (7) is the matrix regression loss function. Inspired by the basic idea of sparse regression for feature selection in [8], the sparse matrix regression (SMR) model can be obtained by replacing the Frobenius norm in the second term of (7) with the $L_{2,1}$-norm: min…”
Section: Matrix Regression (MR)
confidence: 99%
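The $L_{2,1}$-norm mentioned in this statement is the sum of the Euclidean norms of a matrix's rows; penalizing it drives entire rows to zero, which is what makes it useful for feature selection. A minimal sketch (the helper name `l21_norm` is illustrative, not from the cited SMR model):

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum of the Euclidean (L2) norms of the rows of W.
    Unlike the Frobenius norm, this penalty promotes row-wise sparsity."""
    return np.sum(np.linalg.norm(W, axis=1))

W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(W))  # 5.0 + 0.0 + 1.0 = 6.0
```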
“…Zhang et al. [6] claimed that the collaborative representation strategy plays a more important role than the $L_1$-norm-based sparsity constraint by analyzing the working mechanism of SRC, and proposed collaborative representation-based classification (CRC). Qin [7] relaxed the $L_1$-norm regularized optimization problem to a non-linear optimization problem, with the $L_1$ norm approximated by a smoothing function, which can then be solved by existing powerful non-linear optimization methods. Nie et al. [8] presented a robust linear regression model based on sparse constraints and designed a robust feature selection (RFS) algorithm.…”
Section: Introduction
confidence: 99%
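The smoothing idea attributed to Qin [7] replaces the non-differentiable $|t|$ with a smooth surrogate so that standard gradient-based solvers apply. The excerpt does not give the specific smoothing function used, so the sketch below uses one common choice, $\sqrt{t^2 + \varepsilon^2}$, purely for illustration:

```python
import numpy as np

def smooth_abs(t, eps=1e-3):
    # A common smooth surrogate for |t|: sqrt(t^2 + eps^2).
    # The exact smoothing function used in Qin [7] is not given in the
    # excerpt; this choice is illustrative only.
    return np.sqrt(t**2 + eps**2)

def smooth_l1(x, eps=1e-3):
    """Differentiable approximation of the L1 norm; as eps -> 0 it
    converges to sum(|x_i|), but its gradient exists everywhere."""
    return np.sum(smooth_abs(x, eps))

x = np.array([1.0, -2.0, 0.0])
# smooth_l1(x, eps) approaches the true L1 norm (3.0) as eps shrinks
```

With a surrogate like this, the regularized objective becomes smooth, so off-the-shelf non-linear optimization methods (gradient descent, quasi-Newton, etc.) can be applied directly.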