Abstract: Assume that an unknown integral operator living in some known subspace is observed indirectly, by evaluating its action on a discrete measure containing a few isolated Dirac masses at unknown locations. Is this information enough to recover the impulse response locations and the operator with sub-pixel accuracy? We study this question and bring to light key geometrical quantities for exact and stable recovery. We also propose an in-depth study of recovery in the presence of additive white Gaussian noise. We illustrate …
“…We place ourselves in the case of the recovery of a single spike (note that this case is often very informative for studying the limits of super-resolution algorithms [19,38]). Results on basins of attraction show that if the descent is initialized sufficiently close to the observed spike, then under the RIP condition it converges to the desired position at a linear rate.…”
Section: Presentation of the Experiments
In this article, we study the problem of recovering sparse spikes with over-parametrized projected gradient descent. We first provide a theoretical study of approximate recovery with our chosen initialization method, Continuous Orthogonal Matching Pursuit without Sliding. We then study the effect of over-parametrization on the gradient descent, which highlights the benefits of the projection step. Finally, we show the improved computation times of our algorithm compared to state-of-the-art model-based methods on realistic simulated microscopy data.
“…The condition (Cond 1) is strongly connected to the Cramér-Rao lower bound. Using assumptions (10) and (11), we obtain…”
Section: Relationship to Cramér-Rao
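For context, the bound referred to here takes the following standard form in the scalar Gaussian case (generic notation; the paper's conditions (10)–(11) are not reproduced in this excerpt). For observations $y = \Phi(t_0) + \sigma w$ with $w$ white Gaussian noise and $t_0$ a scalar location, any unbiased estimator $\hat t$ satisfies

```latex
\operatorname{Var}(\hat t\,) \;\geq\; \frac{\sigma^2}{\|\partial_t \Phi(t_0)\|_2^2},
```

i.e. the attainable precision degrades with the noise level and improves with the sensitivity of the forward operator to the location.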
“…Let us assume that $R > R_{L^2}$. Without loss of generality, we can also assume that $R_\mu \leqslant R_{L^2}$, since the inequalities in (11) remain valid when replacing $B_{R_\mu}$ by $B_{R'}$ with $R' \leqslant R_\mu$. In this setting, the success condition…”
Section: Lemma D1 (Expectation and Tail Bounds for the Supremum)
“…To the best of our knowledge, deriving theoretical upper bounds remains an open research area that is at the heart of the present work. One of the authors recently conducted a similar study in [8], for the case of blind inverse problems with unknown weights. However, the proof was suboptimal and did not allow us to reach the Cramér-Rao bound asymptotically as σ → 0, contrary to the present work.…”
Section: A Brief Tour of Existing Performance Bounds
Single source localization from low-pass filtered measurements is ubiquitous in optics, wireless communications and sound processing. 
We analyse the performance of the maximum likelihood estimator (MLE) in this context with additive white Gaussian noise.
We derive necessary conditions and sufficient conditions on the maximum admissible noise level to reach a given precision with high probability. 
The two conditions match closely, with a discrepancy related to the conditioning of a noiseless cost function.
They tightly surround the Cramér-Rao lower bound for low noise levels. 
However, they are significantly more precise in describing the performance of the MLE at larger noise levels.
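A minimal sketch of this setting (assuming, for illustration, a Gaussian low-pass kernel and arbitrary numerical values): under additive white Gaussian noise, the MLE for the source location reduces to a least-squares fit, approximated here by a fine grid search.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = np.linspace(-1.0, 1.0, 64)  # sensor sampling positions
width = 0.2                           # assumed width of the low-pass kernel

def model(t0):
    # Noiseless low-pass response to a unit source located at t0.
    return np.exp(-(samples - t0) ** 2 / (2 * width ** 2))

t_true = 0.15
y = model(t_true) + 0.05 * rng.standard_normal(samples.size)  # AWGN observations

# With Gaussian noise, maximizing the likelihood = minimizing the squared
# residual; a fine grid search stands in for a continuous optimizer.
candidates = np.linspace(-1.0, 1.0, 4001)
errs = [np.sum((y - model(t)) ** 2) for t in candidates]
t_hat = candidates[int(np.argmin(errs))]
```

For low noise, the spread of `t_hat` over noise realizations approaches the Cramér-Rao bound; for larger noise levels, the cost landscape develops spurious minima and the grid search can jump far from the true location, which is the regime the necessary/sufficient conditions above are designed to capture.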
We propose a neural network architecture and a training procedure to estimate blurring operators and deblur images from a single degraded image. Our key assumption is that the forward operators can be parameterized by a low-dimensional vector. The models we consider include a description of the point spread function with Zernike polynomials in the pupil plane, or product-convolution expansions, which incorporate space-varying operators. Numerical experiments show that the proposed method can accurately and robustly recover the blur parameters even for large noise levels. For a convolution model, the average signal-to-noise ratio of the recovered point spread function ranges from 13 dB in the noiseless regime to 8 dB in the high-noise regime. In comparison, the tested alternatives yield negative values. This operator estimate can then be used as an input for an unrolled neural network to deblur the image. Quantitative experiments on synthetic data demonstrate that this method outperforms other commonly used methods both perceptually and in terms of SSIM. The algorithm can process a 512 × 512 image in under a second on a consumer graphics card and does not require any human interaction once the operator parameterization has been set up.
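The low-dimensional parameterization idea can be made concrete with a toy example. The paper regresses the parameters from the degraded image alone with a trained network; the sketch below instead assumes a known calibration image and a one-parameter isotropic Gaussian PSF (both simplifying assumptions), so that a plain grid search over the single width parameter recovers the operator.

```python
import numpy as np

def blur(img, w):
    # Periodic Gaussian blur of width w, applied in the Fourier domain:
    # the transfer function of a Gaussian of std w is exp(-2 pi^2 w^2 f^2).
    fx = np.fft.fftfreq(img.shape[0])[:, None]
    fy = np.fft.fftfreq(img.shape[1])[None, :]
    transfer = np.exp(-2.0 * np.pi ** 2 * w ** 2 * (fx ** 2 + fy ** 2))
    return np.fft.ifft2(np.fft.fft2(img) * transfer).real

rng = np.random.default_rng(2)
sharp = rng.random((64, 64))  # stand-in "sharp" calibration image
true_width = 1.5
degraded = blur(sharp, true_width) + 0.01 * rng.standard_normal((64, 64))

# One scalar parameter -> exhaustive search over the parameter space is cheap.
widths = np.linspace(0.5, 3.0, 251)
errs = [np.sum((blur(sharp, w) - degraded) ** 2) for w in widths]
w_hat = widths[int(np.argmin(errs))]
```

With richer parameterizations (Zernike coefficients, product-convolution weights) the search space grows, which is what motivates replacing the grid search with a learned estimator.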