2022
DOI: 10.1109/tsp.2022.3179807
Deep Unfolding With Normalizing Flow Priors for Inverse Problems


Cited by 18 publications (3 citation statements)
References 35 publications
“…Recently, some scholars have proposed deep-unfolding denoising [38][39][40] and quantum-based denoising [41,42], which have achieved results competitive with state-of-the-art image denoising methods. Drawing on the ideas of these methods to denoise point clouds is a valuable direction for future research.…”
Section: Related Work
confidence: 99%
“…Also, the activation function has to be invertible, which is the case for the hyperbolic tangent, but not for the ReLU. More specific solutions can be found in the field of normalizing flows [69]–[71].…”
Section: Arbitrary Transform
confidence: 99%
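As a minimal illustration of the invertibility point made in the excerpt above (not code from any of the cited works): the hyperbolic tangent is strictly monotonic and admits a closed-form inverse, whereas ReLU collapses every negative input to zero, so the original input cannot be recovered.

```python
import numpy as np

x = np.array([-1.5, -0.2, 0.0, 0.7])

# tanh is strictly monotonic on R, so arctanh recovers the input exactly
# (up to floating-point precision).
y_tanh = np.tanh(x)
x_rec = np.arctanh(y_tanh)
print(np.allclose(x, x_rec))   # True

# ReLU maps both -1.5 and -0.2 to 0.0: two distinct inputs, one output,
# so no inverse function can exist.
y_relu = np.maximum(x, 0.0)
print(y_relu[:2])
```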
“…Over the last years, learned regularizers such as the total deep variation [46,47] or adversarial regularizers [55,58,63], as well as extensions of plug-and-play and unrolled methods [24,78,84,88] with learned denoisers [32,35,67,91], have shown promising results; see [8,59] for an overview. Furthermore, many papers have leveraged the tractability of the likelihood of normalizing flows (NFs) to learn a prior [9,30,85,86,90] or used conditional variants to learn the posterior [12,53,79,87]. They utilize the invertibility to optimize over the range of the flow, together with the Gaussian assumption on the latent space. Diffusion models [39,40,76,77] have also shown great generative modelling capabilities and have been used as priors for inverse problems.…”
Section: Introduction
confidence: 99%
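The excerpt above describes optimizing over the range of a flow under a Gaussian latent assumption. A minimal sketch of that idea, with a toy invertible affine map standing in for a trained normalizing flow — all names, dimensions, and parameters here are illustrative assumptions, not taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4                     # latent/signal dimension, number of measurements

# Toy invertible map G(z) = W z + b, standing in for a trained flow;
# adding n*I makes W diagonally dominant and hence invertible.
W = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
G = lambda z: W @ z + b

A = rng.standard_normal((m, n))          # underdetermined forward operator
x_true = G(0.1 * rng.standard_normal(n))
y = A @ x_true                           # noiseless measurements

# MAP-style estimate over the latent: min_z ||A G(z) - y||^2 + lam ||z||^2,
# where the quadratic penalty on z encodes the Gaussian latent prior.
lam = 1e-4
L = 2.0 * np.linalg.norm(A @ W, 2) ** 2 + 2.0 * lam  # gradient Lipschitz constant
z = np.zeros(n)
for _ in range(20000):                   # plain gradient descent with step 1/L
    r = A @ G(z) - y
    z -= (2.0 * W.T @ (A.T @ r) + 2.0 * lam * z) / L

print(np.linalg.norm(A @ G(z) - y))      # data residual after optimization
```

The design choice mirrored here is the one the excerpt highlights: because G is invertible, every candidate signal corresponds to exactly one latent z, so searching over the flow's range with a Gaussian penalty on z is a well-posed surrogate for the learned prior.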