Variational methods are widely applied to ill-posed inverse problems because they can embed prior knowledge about the solution. However, the performance of these methods depends significantly on a set of parameters, which can be estimated through computationally expensive and time-consuming procedures. In contrast, deep learning offers very generic and efficient architectures, at the expense of explainability, since it is often used as a black box, without any fine control over its output. Deep unfolding provides a convenient approach to combine variational and deep learning approaches. Starting from a variational formulation for image restoration, we develop iRestNet, a neural network architecture obtained by unfolding a proximal interior point algorithm. Hard constraints, encoding desirable properties for the restored image, are incorporated into the network thanks to a logarithmic barrier, while the barrier parameter, the stepsize, and the penalization weight are learned by the network. We derive explicit expressions for the gradient of the proximity operator for various choices of constraints, which allows training iRestNet with gradient descent and backpropagation. In addition, we provide theoretical results regarding the stability of the network for a common inverse problem example. Numerical experiments on image deblurring problems show that the proposed approach compares favorably with both state-of-the-art variational and machine learning methods in terms of image quality.
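To make the unfolding concrete, the following minimal NumPy sketch (an illustration, not the authors' implementation) shows one layer of such a network for an assumed quadratic data-fidelity term f(x) = ½‖Hx − y‖² under a positivity constraint, where the logarithmic barrier −μ Σ log(x_i) admits a closed-form proximity operator. The stepsize gamma, barrier parameter mu, and penalization weight lam are the quantities iRestNet would learn per layer; the regularizer gradient grad_reg is a hypothetical placeholder.

```python
import numpy as np

def prox_log_barrier(z, gamma, mu):
    # prox of gamma * (-mu * sum(log x)) at z, componentwise: the positive
    # root of x**2 - z*x - gamma*mu = 0, which keeps every entry > 0.
    return 0.5 * (z + np.sqrt(z**2 + 4.0 * gamma * mu))

def unfolded_layer(x, y, H, gamma, mu, lam, grad_reg):
    # One unfolded proximal interior point iteration: a gradient step on the
    # smooth terms (data fidelity + weighted regularizer), followed by the
    # log-barrier proximity step enforcing the positivity constraint.
    grad = H.T @ (H @ x - y) + lam * grad_reg(x)
    return prox_log_barrier(x - gamma * grad, gamma, mu)
```

Stacking several such layers, each with its own learned (gamma, mu, lam), yields a feed-forward architecture; since prox_log_barrier is differentiable in closed form, the whole network can be trained with standard backpropagation, as the abstract describes.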
Underdetermined or ill-posed inverse problems require additional information to obtain sound solutions with tractable optimization algorithms. Sparsity provides powerful heuristics in this respect, with numerous applications in signal restoration, image recovery, and machine learning. Since the ℓ0 count measure is barely tractable, many statistical and learning approaches have turned to computable proxies, such as the ℓ1 norm. However, the latter does not exhibit the desirable property of scale invariance for sparse data. Generalizing the SOOT Euclidean/Taxicab ℓ1/ℓ2 norm-ratio initially introduced for blind deconvolution, we propose SPOQ, a family of smoothed scale-invariant penalty functions. It consists of a Lipschitz-differentiable surrogate for ℓp-over-ℓq quasi-norm/norm ratios with p ∈ ]0, 2[ and q ≥ 2. This surrogate is embedded into a novel majorize-minimize trust-region approach, generalizing the variable metric forward-backward algorithm. For naturally sparse mass-spectrometry signals, we show that SPOQ significantly outperforms ℓ0, ℓ1, Cauchy, Welsch, and CEL0 penalties on several performance measures. Guidelines on tuning the SPOQ hyperparameters are also provided, suggesting simple data-driven choices.
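As a concrete illustration, here is a minimal NumPy sketch (an assumed form, not the authors' reference code) of a smoothed ℓp-over-ℓq penalty in the spirit of SPOQ: both the ℓp quasi-norm and the ℓq norm are smoothed so that the logarithm of their ratio is Lipschitz-differentiable. The smoothing constants alpha, beta, eta and the default (p, q) below are illustrative hyperparameters, not prescribed values.

```python
import numpy as np

def spoq_penalty(x, p=0.75, q=2.0, alpha=7e-7, beta=3e-3, eta=0.1):
    # Smoothed lp quasi-norm raised to the power p:
    #   l_{p,alpha}(x)^p = sum((x_i^2 + alpha^2)^(p/2) - alpha^p)
    lp_p = np.sum((x**2 + alpha**2) ** (p / 2.0) - alpha**p)
    # Smoothed lq norm:
    #   l_{q,eta}(x) = (eta^q + sum(|x_i|^q))^(1/q)
    lq = (eta**q + np.sum(np.abs(x) ** q)) ** (1.0 / q)
    # Log of the smoothed p-over-q ratio
    return np.log((lp_p + beta**p) ** (1.0 / p) / lq)
```

In the limit where the smoothing constants vanish, the ratio ‖x‖p/‖x‖q is invariant under rescaling x → c·x, so the penalty promotes sparse supports without penalizing amplitude; this is the scale-invariance property the abstract contrasts with the ℓ1 norm.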