2019 13th International Conference on Sampling Theory and Applications (SampTA) 2019
DOI: 10.1109/sampta45681.2019.9030906
Optimally Sample-Efficient Phase Retrieval with Deep Generative Models

Cited by 10 publications (21 citation statements)
References 8 publications
“…Non-linear measurement models. While Theorem 1 concerns linear observation models, analogous guarantees have been provided for a variety of non-linear measurement models, including 1-bit observations [85], [105], [69], spiked matrix models [7], [28], phase retrieval [84], [53], principal component analysis [87], and general single-index models [88], [83], [86]. While these each come with their own challenges, the intuition behind their associated results is often similar to that discussed above for the linear model, with the m = O(k log(Lr/δ)) scaling typically remaining.…”
Section: E Further Developments
Mentioning, confidence: 98%
“…In the case of phase retrieval with random weights and expansivity, solving the optimization problem min_z ∥y − |AG(z)|∥₂² allows for signal recovery with m being proportional to k (ignoring the n and d dependence) [53]. This dependence is information-theoretically optimal, and it is noteworthy that it is attained with an efficient algorithm under random generative priors.…”
Section: E Further Developments
Mentioning, confidence: 99%
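The optimization problem quoted above, min_z ∥y − |AG(z)|∥₂², can be illustrated with a toy sketch: a one-layer random ReLU "generator" G standing in for the deep network, a Gaussian measurement matrix A, and plain gradient descent on the latent variable z. This is a minimal illustration under those simplifying assumptions, not the algorithm analyzed in [53]; all dimensions and step sizes here are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: latent dimension k, signal dimension n, measurements m.
k, n, m = 5, 20, 60

# Random one-layer ReLU generator G(z) = relu(W z) and Gaussian
# measurement matrix A, standing in for the deep generative prior.
W = rng.standard_normal((n, k)) / np.sqrt(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

def G(z):
    return np.maximum(W @ z, 0.0)

# Ground-truth latent code and phaseless measurements y = |A G(z*)|.
z_star = rng.standard_normal(k)
y = np.abs(A @ G(z_star))

def loss_and_grad(z):
    u = W @ z
    g = np.maximum(u, 0.0)            # G(z)
    v = A @ g
    r = np.abs(v) - y
    loss = np.sum(r ** 2)             # ||y - |A G(z)|||_2^2
    dv = 2.0 * r * np.sign(v)         # derivative through the modulus
    dg = A.T @ dv
    dz = W.T @ (dg * (u > 0))         # ReLU mask
    return loss, dz

# Plain gradient descent on the latent variable z.
z = rng.standard_normal(k)
losses = []
for _ in range(1000):
    l, g_z = loss_and_grad(z)
    losses.append(l)
    z -= 0.01 * g_z
losses.append(loss_and_grad(z)[0])

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.5f}")
```

The gradient descends the empirical risk directly in latent space; the theoretical results concern the landscape of this (nonconvex) objective under random weights and expansivity.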
“…Convergent methods have only been proposed for restricted use cases. In compressed sensing problems with Gaussian measurement matrices, one can show that the objective function has only a few critical points and design an algorithm that finds the global optimum [13,18]. With a prior given by a VAE, and under technical hypotheses on the encoder and the VAE architecture, the Joint Posterior Maximization with Autoencoding Prior (JPMAP) framework of [11] converges toward a minimizer of the joint posterior p(x, z|y) of the image x and latent z given the observation y. JPMAP is nevertheless only designed for VAEs of limited expressiveness, with a simple fixed Gaussian prior distribution over the latent space.…”
Section: Deep Generative Models For Inverse Problems
Mentioning, confidence: 99%
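The joint-posterior idea behind JPMAP can be sketched with a toy alternating scheme: minimize a negative log joint posterior combining a data-fidelity term in x, a coupling term tying x to a decoder output D(z), and a Gaussian prior on z. For illustration the decoder here is a linear map M (a stand-in, not a real VAE decoder), which makes both block updates closed-form; this is not the actual JPMAP algorithm of [11], and all symbols (H, M, sigma, gamma) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: image x in R^n, latent z in R^k, observation y in R^p.
n, k, p = 30, 4, 15
sigma, gamma = 0.1, 0.5            # noise level and decoder coupling

H = rng.standard_normal((p, n)) / np.sqrt(n)   # forward operator
M = rng.standard_normal((n, k)) / np.sqrt(k)   # linear stand-in decoder D(z) = M z

x_true = M @ rng.standard_normal(k)
y = H @ x_true + sigma * rng.standard_normal(p)

def objective(x, z):
    # Negative log joint posterior -log p(x, z | y), up to constants.
    return (np.sum((y - H @ x) ** 2) / (2 * sigma ** 2)
            + np.sum((x - M @ z) ** 2) / (2 * gamma ** 2)
            + 0.5 * np.sum(z ** 2))

x, z = np.zeros(n), np.zeros(k)
objs = [objective(x, z)]
for _ in range(20):
    # x-step: quadratic in x, solved in closed form.
    A_x = H.T @ H / sigma ** 2 + np.eye(n) / gamma ** 2
    b_x = H.T @ y / sigma ** 2 + M @ z / gamma ** 2
    x = np.linalg.solve(A_x, b_x)
    # z-step: ridge regression of x onto the decoder range.
    A_z = M.T @ M / gamma ** 2 + np.eye(k)
    b_z = M.T @ x / gamma ** 2
    z = np.linalg.solve(A_z, b_z)
    objs.append(objective(x, z))

print(f"objective: {objs[0]:.2f} -> {objs[-1]:.2f}")
```

Because each block update exactly minimizes a jointly convex quadratic over one variable, the objective is monotonically nonincreasing; the convergence analysis in [11] handles the much harder case of a nonlinear VAE decoder.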
“…There exists a plethora of methods to incorporate sparsity in phase retrieval. These include convex approaches (Ohlsson, Yang, Dong and Sastry 2012, Li and Voroninski 2013), thresholding strategies (Wang et al 2017, Yuan, Wang and Wang 2019), greedy algorithms (Shechtman, Beck and Eldar 2014), algebraic methods (Beinert and Plonka 2017) and tools from deep learning (Hand, Leong and Voroninski 2018, Kim and Chung 2019). In the following we briefly discuss a few selected techniques in more detail.…”
Section: Convex Optimization
Mentioning, confidence: 99%
“…Hand et al (2018) propose an alternative approach to model signals with a small number of parameters, based on generative models. They suppose that the signal of interest is in the range of a deep generative neural network, where the generative model is a d-layer, fully connected, feed-forward neural network with random weights.…”
Section: Convex Optimization
Mentioning, confidence: 99%
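The random-weight generative model described above can be sketched in a few lines: a d-layer fully connected feed-forward network with i.i.d. Gaussian weights and ReLU activations, with no training involved. The specific layer widths and the ReLU-on-every-layer choice below are illustrative assumptions, not the exact architecture from the paper.

```python
import numpy as np

def random_generator(layer_dims, seed=0):
    """Build G : R^k -> R^n as a d-layer fully connected ReLU network
    with i.i.d. Gaussian weights (no training), layer_dims = [k, ..., n]."""
    rng = np.random.default_rng(seed)
    weights = [rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
               for d_in, d_out in zip(layer_dims[:-1], layer_dims[1:])]

    def G(z):
        h = np.asarray(z, dtype=float)
        for W in weights:
            h = np.maximum(W @ h, 0.0)   # ReLU after every layer
        return h

    return G

# Expansive architecture: each layer is wider than the one before it,
# mapping a k=5 dimensional latent code to an n=200 dimensional signal.
G = random_generator([5, 50, 200])
z = np.random.default_rng(1).standard_normal(5)
x = G(z)
print(x.shape)
```

The expansivity (growing layer widths) is the structural assumption under which the recovery guarantees for such random-weight priors are typically stated.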