“…In recent years, learned regularizers like the total deep variation [46,47] or adversarial regularizers [55,58,63], as well as extensions of plug-and-play and unrolled methods [24,78,84,88] with learned denoisers [32,35,67,91], have shown promising results; see [8,59] for an overview. Furthermore, many papers leveraged the tractability of the likelihood of normalizing flows (NFs) to learn a prior [9,30,85,86,90] or used conditional variants to learn the posterior [12,53,79,87]. These approaches exploit the invertibility of the flow to optimize over its range, combined with the Gaussian assumption on the latent space. Diffusion models [39,40,76,77] have also shown great generative modelling capabilities and have been used as priors for inverse problems.…”
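The latent-space optimization mentioned above can be illustrated with a minimal sketch: a linear invertible map stands in for a trained NF generator (the matrix `W`, offset `b`, masking operator `A`, and all hyperparameters are illustrative assumptions, not taken from the cited works). We minimize a data-fidelity term over the latent variable `z`, with a quadratic penalty on `z` reflecting the Gaussian latent assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Toy invertible "flow" G(z) = W z + b. A trained NF generator would stand in
# here; W and b are illustrative assumptions, not from the cited papers.
W = np.eye(d) + 0.1 * rng.standard_normal((d, d))
b = rng.standard_normal(d)
G = lambda z: W @ z + b

# Forward operator A: observe only the first half of the coordinates
# (an inpainting-style inverse problem).
A = np.zeros((d // 2, d))
A[np.arange(d // 2), np.arange(d // 2)] = 1.0

x_true = G(rng.standard_normal(d))   # ground truth lies in the range of G
y = A @ x_true                       # noiseless partial observation

# Optimize over the latent z: 0.5||A G(z) - y||^2 + 0.5*lam*||z||^2,
# where the ||z||^2 term encodes the standard-Gaussian latent prior.
lam, step = 1e-2, 0.1
z = np.zeros(d)
for _ in range(500):
    r = A @ G(z) - y
    grad = W.T @ (A.T @ r) + lam * z   # analytic gradient of the objective
    z -= step * grad

x_hat = G(z)  # reconstruction, guaranteed to lie in the range of the flow
```

Because the iterate is always mapped through `G`, the reconstruction stays in the range of the flow by construction, which is the key point of these range-constrained approaches.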