2020
DOI: 10.1016/j.image.2020.115989
iPiano-Net: Nonconvex optimization inspired multi-scale reconstruction network for compressed sensing

Cited by 26 publications (9 citation statements)
References 43 publications
“…DUNs for CS and compressive sensing MRI (CS-MRI) usually integrate effective convolutional neural network (CNN) denoisers into optimization methods such as half quadratic splitting (HQS) [46], [5], [1], alternating minimization (AM) [26], [31], [53], the iterative shrinkage-thresholding algorithm (ISTA) [40], [8], [41], approximate message passing (AMP) [48], [54], the alternating direction method of multipliers (ADMM) [37] and the inertial proximal algorithm for nonconvex optimization (iPiano) [30]. Different optimization methods usually lead to different optimization-inspired DUNs.…”
Section: Deep Unfolding Networkmentioning
confidence: 99%
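As a brief pointer on the last algorithm named in the excerpt above, the iPiano iteration of [30] combines a gradient step on the smooth (possibly nonconvex) data term f with an inertial (heavy-ball) term and a proximal step on the regularizer g. A sketch of the standard update, with step size \alpha and inertial weight \beta:

x^{k+1} = \mathrm{prox}_{\alpha g}\!\left( x^{k} - \alpha \nabla f(x^{k}) + \beta \left( x^{k} - x^{k-1} \right) \right)

In optimization-inspired DUNs of this kind, the proximal mapping is typically replaced by a learned CNN denoiser, and \alpha, \beta become trainable per-phase parameters.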
“…Each image block of size 33 × 33 is sampled and reconstructed independently for the first 400 epochs, and for the last ten epochs we adopt larger image blocks of size 99 × 99 as inputs to further fine-tune the model. To alleviate blocking artifacts, we first unfold the 99 × 99 blocks into overlapping 33 × 33 blocks during the sampling process Φx, and then fold the 33 × 33 blocks back into larger blocks during the initialization Φ⊤y [30]. We unfold whole images in the same way during testing.…”
Section: Experiments 41 Implementation Detailsmentioning
confidence: 99%
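The overlapping unfold/fold procedure quoted above can be made concrete with a short sketch. This is an illustrative reimplementation, not the cited authors' code; the names (Phi, block, stride) and the averaging of overlapping contributions are assumptions.

# Illustrative sketch of overlapping block sampling (Phi x) and initialization
# (Phi^T y) as described above; names and the stride choice are assumptions.
import torch
import torch.nn.functional as F

def sample_and_init(x, Phi, block=33, stride=11):
    """x: (N, 1, H, W) image batch; Phi: (m, block*block) sampling matrix."""
    N, _, H, W = x.shape
    # Unfold into overlapping block x block patches: (N, block*block, L)
    patches = F.unfold(x, kernel_size=block, stride=stride)
    # Sample each patch independently: y = Phi x, shape (N, m, L)
    y = torch.einsum('mk,nkl->nml', Phi, patches)
    # Per-patch initialization Phi^T y, shape (N, block*block, L)
    init = torch.einsum('mk,nml->nkl', Phi, y)
    # Fold back to the full block; overlaps are summed, so divide by the
    # per-pixel overlap count to average them.
    count = F.fold(F.unfold(torch.ones_like(x), kernel_size=block, stride=stride),
                   output_size=(H, W), kernel_size=block, stride=stride)
    x0 = F.fold(init, output_size=(H, W), kernel_size=block, stride=stride) / count
    return y, x0

With block = 33 and a stride that tiles a 99 × 99 input (e.g. 11), this matches the behaviour described in the excerpt under the stated assumptions: sampling sees overlapping 33 × 33 blocks, while the folded Φ⊤y initialization avoids hard block boundaries.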
“…Deep unfolding models are constructed by mapping iterative algorithms with an unfixed number of steps onto deep neural networks with a fixed number of steps [26], [8], [19], [9]. Many nonlinear iterative algorithms have been unfolded in this way, such as ISTA [27], [8], AMP [26], [9], half-quadratic splitting (HQS) [19], the alternating direction method of multipliers (ADMM) [28], [29] and the iPiano algorithm [30]. By combining the interpretability of model-based methods with the trainable nature of deep learning models, they strike a good balance between reconstruction performance and interpretability.…”
Section: Introductionmentioning
confidence: 99%
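To make the unfolding idea in the excerpt above concrete, below is a minimal sketch of one trainable phase, assuming a data-fidelity gradient step followed by a learned residual denoiser standing in for the proximal mapping; the layer sizes, the operator handles A/At and the residual form are illustrative assumptions, not any specific cited architecture.

# Minimal sketch of one unrolled phase: a data-fidelity gradient step followed by
# a small CNN in place of the hand-crafted proximal/shrinkage step (assumption).
import torch
import torch.nn as nn

class UnrolledPhase(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))        # learnable step size
        self.denoiser = nn.Sequential(                      # learned prox surrogate
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, x, y, A, At):
        # Gradient step on 0.5 * ||A(x) - y||^2, where A/At are the sampling
        # operator and its adjoint supplied by the caller.
        x = x - self.step * At(A(x) - y)
        # Residual denoising replaces the proximal mapping.
        return x + self.denoiser(x)

# A K-phase network is then simply a stack of such phases:
# phases = nn.ModuleList(UnrolledPhase() for _ in range(K))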
“…At present, a few methods [32], [30], [33], [34], [17] reconstruct images at different CS ratios with only one model, and they can be roughly divided into two categories. The first kind [32], [30] trains a single model on a set of sampling matrices with different CS ratios, so that the model adapts to all sampling matrices in this set.…”
Section: Introductionmentioning
confidence: 99%
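The first strategy described in the excerpt above can be illustrated with a short training sketch: one reconstruction model is exposed to a set of sampling matrices at different CS ratios, drawn at random per step. The ratios, matrix construction and model interface here are assumptions for illustration, not the cited authors' settings.

# Illustrative sketch: train one model against several sampling matrices with
# different CS ratios (ratios, matrices and model signature are assumptions).
import random
import torch

n = 33 * 33                                        # vectorized 33x33 block
ratios = [0.10, 0.25, 0.50]                        # example CS ratios
mats = {r: torch.randn(int(r * n), n) / n ** 0.5 for r in ratios}

def training_step(model, x_blocks, optimizer):
    """x_blocks: (N, n) batch of vectorized image blocks."""
    Phi = mats[random.choice(ratios)]              # pick one ratio per step
    y = x_blocks @ Phi.t()                         # sampling:       y  = Phi x
    x0 = y @ Phi                                   # initialization: x0 = Phi^T y
    loss = torch.mean((model(x0, Phi) - x_blocks) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()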