2019
DOI: 10.1109/tci.2018.2882698
Solving Inverse Computational Imaging Problems Using Deep Pixel-Level Prior

Cited by 24 publications (20 citation statements)
References 24 publications
“…For compressive MRI recovery, we used PnP ADMM from (13) with f(·) as the CNN-based denoiser described above; we will refer to the combination as PnP-CNN. We employed a total of 100 ADMM iterations, and in each ADMM iteration, we performed four steps of CG to approximate (12), for which we used σ² = 1 = η.…”
Section: Demonstration of PnP in MRI (A. Parallel Cardiac MRI)
Citation type: mentioning, confidence: 99%
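The excerpt above outlines the PnP-CNN recipe: an ADMM loop whose data-consistency subproblem is approximated with a few conjugate-gradient (CG) steps and whose proximal step is replaced by a CNN denoiser. The sketch below is a minimal, generic illustration of that structure in Python/NumPy; the dense operator A, the placeholder denoiser, and all names are illustrative assumptions rather than the cited authors' code (an MRI implementation would use FFT/coil-sensitivity operators and a trained network), and eta stands in for the penalty parameter η mentioned in the excerpt.

import numpy as np

def pnp_admm(A, y, denoise, eta=1.0, n_iters=100, n_cg=4):
    """Minimal PnP-ADMM sketch (illustrative, not the cited authors' code).

    A       : (m, n) forward operator as a dense NumPy array.
    y       : (m,) measurement vector.
    denoise : callable standing in for the trained CNN denoiser f(.).
    eta     : ADMM penalty parameter.
    n_cg    : CG steps used to approximate the data-consistency subproblem.
    """
    m, n = A.shape
    x = np.zeros(n)
    v = np.zeros(n)
    u = np.zeros(n)
    Aty = A.T @ y

    def cg_steps(rhs, x0, steps):
        # A few conjugate-gradient steps on (A^T A + eta I) x = rhs.
        xk = x0.copy()
        r = rhs - (A.T @ (A @ xk) + eta * xk)
        p = r.copy()
        rs = r @ r
        for _ in range(steps):
            if rs < 1e-12:            # already (numerically) converged
                break
            Ap = A.T @ (A @ p) + eta * p
            alpha = rs / (p @ Ap)
            xk = xk + alpha * p
            r = r - alpha * Ap
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        return xk

    for _ in range(n_iters):
        x = cg_steps(Aty + eta * (v - u), x, n_cg)  # data-consistency step
        v = denoise(x + u)                          # denoiser replaces the prior's prox
        u = u + x - v                               # dual update
    return x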
“…Also, since CNN training occurs in the presence of dataset-specific forward models, generalization from training to test scenarios remains an open question [9]. Other learning-based methods have been proposed based on bi-level optimization (e.g., [10]), adversarially learned priors (e.g., [12]), and autoregressive priors (e.g., [13]). Consequently, the integration of learning-based methods into physical inverse problems remains a fertile area of research.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
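For readers unfamiliar with the last category: "autoregressive priors" such as the pixel-level prior of [13] model the image density pixel by pixel. As a rough, generic illustration of how such a prior can enter a MAP-style reconstruction (the exact objective of [13] is not reproduced in the excerpt), one common form is

\hat{x} \;=\; \arg\max_{x}\ \log p(y \mid x) \;+\; \lambda \sum_{i} \log p\!\left(x_i \,\middle|\, x_{<i};\, \theta\right),

where each conditional p(x_i | x_{<i}; θ) is produced by a trained network over a fixed pixel ordering and λ weights the learned prior against the data likelihood.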
“…A class of deep learning-based solutions involves learning a regularizer or proximal-mapping stage and then iteratively solving a MAP problem. Methods like [21], [22], [23] fall under this category. Another class of algorithms is designed as a feed-forward deep neural network trained in either a supervised or self-supervised manner.…”
Section: Image Reconstruction
Citation type: mentioning, confidence: 99%
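The first class described in this excerpt (learn a regularizer or proximal step, then solve the MAP problem iteratively) can be sketched as a proximal-gradient-style loop in which a trained network replaces the hand-crafted proximal operator. The snippet below is a generic, assumed illustration in Python/NumPy; prox_net, the step size, and the dense operator A are placeholders, not a specific method from [21]-[23].

import numpy as np

def iterative_map_with_learned_prox(A, y, prox_net, step=0.5, n_iters=50):
    """Generic sketch: alternate a gradient step on the data-fidelity term
    0.5*||Ax - y||^2 with a learned proximal/regularization step."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)        # gradient of the data term
        x = prox_net(x - step * grad)   # trained network replaces a hand-crafted prox
                                        # (step should be <= 1/||A||^2 for stability)
    return x

# Placeholder usage with an identity "network", just to show the call pattern:
# x_hat = iterative_map_with_learned_prox(A, y, prox_net=lambda z: z)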
“…where θ is a vector of parameters that controls both the penalty function φ and the analysis filters, L. Regularizers of other forms can also be learned, e.g., [63] use a pixel-wise autoregressive model as a regularizer.…”
Section: Learning the Regularization Term
Citation type: mentioning, confidence: 99%
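For context, a representative analysis-form learned regularizer of the kind parameterized here (learned filters L_k and a learned penalty φ, both controlled by θ) can be written as below; this is a generic form for illustration, since the equation the excerpt refers to is not reproduced:

R(x;\theta) \;=\; \sum_{k} \sum_{n} \phi\!\big( (L_k x)_n ;\, \theta \big),

whereas the pixel-wise autoregressive regularizer of [63] instead penalizes, roughly, -\sum_{n} \log p(x_n \mid x_{<n}) under a learned conditional model.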