2021
DOI: 10.1109/tip.2021.3088611
Deep-Learned Regularization and Proximal Operator for Image Compressive Sensing

Abstract: Deep learning has recently been intensively studied in the context of image compressive sensing (CS) to discover and represent complicated image structures. These approaches, however, either suffer from inflexibility for an arbitrary sampling ratio or lack an explicit deep-learned regularization term. This paper aims to solve the CS reconstruction problem by combining the deep-learned regularization term and proximal operator. We first introduce a regularization term using a carefully designed residual-regres…

Cited by 33 publications (7 citation statements) · References 44 publications
“…Conventionally, SCI aims to reconstruct the original high-dimensional object x ∈ R^n from m (m ≪ n) random measurements y ∈ R^m [33]. Mathematically, the imaging process can be described as follows:…”
Section: A. SCI Problem Formulation (mentioning)
Confidence: 99%
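To make the quoted measurement model concrete, here is a minimal NumPy sketch of compressive sampling with a random sensing matrix. The dimensions, the Gaussian Φ, and every variable name are illustrative assumptions, not details taken from the cited paper.

```python
import numpy as np

# Minimal sketch of the CS/SCI measurement model from the quote:
# an object x in R^n is observed through m << n random measurements y.
rng = np.random.default_rng(0)

n = 1024          # dimension of the (vectorized) object x
m = 256           # number of measurements, m << n
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix

x = rng.standard_normal(n)   # ground-truth signal (stand-in for an image)
y = Phi @ x                  # compressive measurements, y in R^m
print(y.shape)               # (256,)
```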
“…In this work, we utilize the proximal momentum gradient descent algorithm to reconstruct the plain image from $\tilde{y}$ by alternating between the gradient descent step and the proximal operator step:

$$v^k = \Phi^{\mathrm{T}}(\Phi x^k - \tilde{y}) + \gamma^{k-1} v^{k-1},$$

$$x^{k+1} = D(x^k - \alpha v^k),$$

$$\gamma^k = \frac{\varepsilon^{\mathrm{T}}\left(D(x^k - \alpha v^k + \varepsilon) - x^{k+1}\right)}{m},$$

where $\alpha$ is the step size, $\varepsilon$ is a standard normal random vector, and $D$ is the proximal operator for the regularization term $R$. This splitting approach is an efficient CS reconstruction framework that exploits the plug-and-play prior [31, 32]. We set the initial guess $x^0$ to zero and the step size $\alpha$ to 1.…”
Section: DRCAN Prior for Image Reconstruction (mentioning)
Confidence: 99%
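The quoted update rules translate almost line-for-line into code. Below is a minimal sketch of the proximal momentum gradient descent loop under stated assumptions: a generic denoiser `D` stands in as the plug-and-play proximal operator (here a toy soft-thresholding function, not the authors' DRCAN network), and `Phi`, `y_tilde` are hypothetical inputs.

```python
import numpy as np

def soft_threshold(z, lam=0.05):
    """Toy stand-in for the learned proximal operator D."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def pnp_momentum_recon(Phi, y_tilde, D=soft_threshold, alpha=1.0,
                       iters=50, seed=0):
    """Proximal momentum gradient descent, following the quoted updates."""
    rng = np.random.default_rng(seed)
    m, n = Phi.shape
    x = np.zeros(n)      # initial guess x^0 = 0, as stated in the quote
    v = np.zeros(n)
    gamma = 0.0
    for _ in range(iters):
        # gradient descent step with momentum: v^k
        v = Phi.T @ (Phi @ x - y_tilde) + gamma * v
        z = x - alpha * v
        # proximal (denoising) step: x^{k+1} = D(x^k - alpha v^k)
        x_next = D(z)
        # Monte Carlo update of the momentum coefficient gamma^k
        eps = rng.standard_normal(n)   # standard normal random vector
        gamma = eps @ (D(z + eps) - x_next) / m
        x = x_next
    return x
```

With the toy `Phi` and `y` from the previous sketch, `x_hat = pnp_momentum_recon(Phi, y)` runs the loop end to end; swapping `D` for a trained denoiser gives the plug-and-play variant the quote refers to.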
“…We can acquire non-local information in multiple ways, such as by employing self non-local modules (based on non-local means [13]), global pooling modules (channel-wise and spatial-wise attention), multi-scale inputs, long-range connections, and recurrent connections in the network. In IR approaches, for example, many networks employ global pooling to extract long-range dependencies, e.g., RCAN [23], [24] and RIDNet [25]. These attention modules squeeze the feature map (channel-wise or spatial-wise) and generate attentive weights [26]–[28] from the whole channel or the spatial locations of the feature map, which serve as the non-local features.…”
Section: Introduction (mentioning)
Confidence: 99%
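As a concrete illustration of the global-pooling attention the quote describes, here is a minimal channel-attention sketch in the spirit of RCAN-style squeeze-and-excitation. The shapes, the reduction ratio, and the two random weight matrices are assumptions for illustration only, not the cited networks' actual parameters.

```python
import numpy as np

# Channel attention via global pooling: squeeze each channel to a scalar,
# gate those scalars through a small bottleneck, and rescale the channels.
rng = np.random.default_rng(0)

C, H, W, r = 16, 8, 8, 4                 # channels, height, width, reduction
feat = rng.standard_normal((C, H, W))    # input feature map
W1 = rng.standard_normal((C // r, C)) * 0.1  # toy bottleneck weights
W2 = rng.standard_normal((C, C // r)) * 0.1

s = feat.mean(axis=(1, 2))               # squeeze: global average pool -> (C,)
hidden = np.maximum(W1 @ s, 0.0)         # excitation: ReLU bottleneck
w = 1.0 / (1.0 + np.exp(-(W2 @ hidden))) # sigmoid -> attentive weights in (0,1)
out = feat * w[:, None, None]            # rescale each channel by its weight
```

The global pool is what injects the long-range dependency: each channel's weight depends on statistics of the entire spatial extent, not just a local neighborhood.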