In this paper, we propose a general method to devise maximum likelihood
penalized (regularized) algorithms with positivity constraints. Moreover, we
explain how to obtain ‘product forms’ of these algorithms. The algorithmic
method is based on Kuhn–Tucker first-order optimality conditions. Its application
domain is not restricted to the cases considered in this paper, but it can be
applied to any convex objective function with linear constraints. It is
especially well suited to objective functions whose bounded domain
completely encloses the domain of the (linear) constraints. The
Poisson noise case, typical of this last situation, and the Gaussian additive
noise case are both considered, each associated with various forms of
regularization functions, mainly quadratic and entropy terms. The algorithms are
applied to the deconvolution of synthetic images blurred by a realistic point
spread function similar to that of the Hubble Space Telescope operating in
the far-ultraviolet and corrupted by noise. The effect of the relaxation
on the convergence speed of the algorithms is analysed. The particular
behaviour of the algorithms corresponding to different forms of regularization
functions is described. We show that the choice of the ‘prior’ image plays a
key role in the regularization and that the best results are obtained with Tikhonov
regularization with a Laplacian operator. The analyses of the Poisson and
Gaussian additive noise cases lead to similar conclusions. We bring to
the fore the close relationship between Tikhonov regularization using
derivative operators and regularization by a distance to a ‘default image’
introduced by Horne (Horne K 1985 Mon. Not. R. Astron. Soc. 213 129–41).
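To make the ‘product form’ derived from the Kuhn–Tucker conditions concrete, the sketch below implements the classic multiplicative update for the (unregularized) Poisson case in a 1D toy setting. This is our own illustration, not code from the paper; all names (`convolve`, `adjoint`, `product_form_step`, `psf`) are illustrative choices, and a circular convolution is assumed to keep the example self-contained.

```python
# Illustrative sketch (not the paper's code): the multiplicative 'product
# form' update x <- x * H^T(y / Hx) for the Poisson likelihood. A fixed
# point of this update satisfies the Kuhn-Tucker first-order conditions,
# and positivity of the iterates is preserved automatically.

def convolve(x, h):
    """Circular 1D convolution: (Hx)_i = sum_j x[(i-j) % n] * h[j]."""
    n = len(x)
    return [sum(x[(i - j) % n] * h[j] for j in range(n)) for i in range(n)]

def adjoint(r, h):
    """Apply H^T, i.e. circular correlation with the kernel h."""
    n = len(r)
    return convolve(r, [h[(-j) % n] for j in range(n)])

def product_form_step(x, y, h):
    """One multiplicative update; h is assumed normalized (sum(h) == 1)."""
    model = convolve(x, h)                        # Hx
    ratio = [yi / mi for yi, mi in zip(y, model)] # y / Hx
    corr = adjoint(ratio, h)                      # H^T (y / Hx)
    return [xi * ci for xi, ci in zip(x, corr)]   # the 'product form'

# Tiny demo: blur a sparse signal, then iterate from a flat start.
psf = [0.5, 0.25, 0.0, 0.25]      # normalized kernel
x_true = [4.0, 0.0, 2.0, 0.0]
y = convolve(x_true, psf)         # noiseless data for illustration
x = [1.0, 1.0, 1.0, 1.0]
for _ in range(20):
    x = product_form_step(x, y, psf)
```

A useful check on such updates is flux conservation: after any step, the total of the iterate equals the total of the data (since the PSF is normalized), and the iterates remain strictly positive.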
In 1993, Snyder et al investigated the maximum-likelihood (ML) approach to the deconvolution of images acquired by a charge-coupled device (CCD) camera and proved that the iterative method proposed by Llacer and Núñez in 1990 can be derived from the expectation-maximization method of Dempster et al for the solution of ML problems. The utility of the approach was demonstrated on the reconstruction of images of the Hubble Space Telescope. This problem deserves further investigation because it can be important in the deconvolution of images of faint objects provided by next-generation ground-based telescopes, which will be characterized by large collecting areas and advanced adaptive optics. In this paper, we first prove the existence of solutions of the ML problem by investigating the properties of the negative log of the likelihood function. Next, we show that the iterative method proposed by the above-mentioned authors is a scaled gradient method for the constrained minimization of this function in the closed and convex cone of the non-negative vectors and that, if it is convergent, the limit is a solution of the constrained ML problem. Moreover, by examining the asymptotic behavior in the regime of a high number of photons, we find an approximation that, as shown by numerical experiments, works well for any number of photons, thus providing an efficient implementation of the algorithm. In the case of image deconvolution, we also extend the method to take into account boundary effects and multiple images of the same object. The approximation proposed in this paper is tested on a few numerical examples.
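The scaled-gradient interpretation mentioned above can be illustrated numerically in a simplified 1D setting (our own sketch, not the paper's implementation): with a normalized PSF, H^T 1 = 1, the gradient of the Poisson negative log-likelihood is ∇J(x) = 1 − H^T(y/Hx), and the scaled step x − diag(x)·∇J(x) coincides exactly with the multiplicative Richardson–Lucy update x ⊙ H^T(y/Hx).

```python
# Sketch (illustrative, not from the paper): verify that the scaled
# gradient step x - D grad J(x), with D = diag(x), equals the
# multiplicative update when the PSF is normalized (H^T 1 = 1).

def convolve(x, h):
    """Circular 1D convolution."""
    n = len(x)
    return [sum(x[(i - j) % n] * h[j] for j in range(n)) for i in range(n)]

def adjoint(r, h):
    """Apply H^T (circular correlation)."""
    n = len(r)
    return convolve(r, [h[(-j) % n] for j in range(n)])

psf = [0.5, 0.25, 0.0, 0.25]   # normalized, so H^T 1 = 1
y = [2.0, 1.5, 1.0, 1.5]       # toy data
x = [1.0, 2.0, 0.5, 1.0]       # current positive iterate

ratio = [yi / mi for yi, mi in zip(y, convolve(x, psf))]   # y / Hx
ht_ratio = adjoint(ratio, psf)                             # H^T (y / Hx)

grad = [1.0 - g for g in ht_ratio]                          # grad J(x)
scaled_gradient = [xi - xi * gi for xi, gi in zip(x, grad)] # x - D grad J
multiplicative = [xi * gi for xi, gi in zip(x, ht_ratio)]   # RL update
```

Algebraically the two forms are identical, x − x(1 − g) = xg, so the agreement here is exact up to floating-point rounding; the scaled-gradient view is what opens the door to relaxation and convergence analysis.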
The approach proposed in a previous paper for the reduction of boundary effects in the deconvolution of astronomical images by the Richardson–Lucy method (RLM) is extended here to the problem of multiple image deconvolution and applied to the reconstruction of the images of LINC-NIRVANA, the German-Italian beam combiner for the Large Binocular Telescope (LBT). We investigate the multiple image RLM, its accelerated version known as ordered subsets expectation maximization (OSEM), and the regularized versions of these two methods. In addition, we show how the approach can be extended to the iterative space reconstruction algorithm (ISRA), an iterative method that converges to nonnegative least-squares solutions. Numerical simulations indicate that the approach can provide excellent results with a considerable reduction of the boundary effects.
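For readers unfamiliar with ISRA, the following hedged sketch shows the basic iteration in a 1D toy problem (our own illustrative setting, unrelated to the LINC-NIRVANA data or code): the multiplicative update x ← x ⊙ (H^T y)/(H^T H x) keeps the iterates nonnegative and drives down the least-squares residual.

```python
# Illustrative sketch of the ISRA multiplicative update (toy 1D problem,
# not the paper's code): x <- x * (H^T y) / (H^T H x). Starting from a
# positive x with nonnegative data y, iterates remain nonnegative and
# converge toward a nonnegative least-squares solution.

def convolve(x, h):
    """Circular 1D convolution."""
    n = len(x)
    return [sum(x[(i - j) % n] * h[j] for j in range(n)) for i in range(n)]

def adjoint(r, h):
    """Apply H^T (circular correlation)."""
    n = len(r)
    return convolve(r, [h[(-j) % n] for j in range(n)])

def isra_step(x, y, h):
    num = adjoint(y, h)               # H^T y
    den = adjoint(convolve(x, h), h)  # H^T H x
    return [xi * ni / di for xi, ni, di in zip(x, num, den)]

def residual(x, y, h):
    """Least-squares data misfit ||y - Hx||^2."""
    return sum((yi - mi) ** 2 for yi, mi in zip(y, convolve(x, h)))

psf = [0.5, 0.25, 0.0, 0.25]
y = [2.0, 1.5, 1.0, 1.5]
x = [1.0, 1.0, 1.0, 1.0]
r0 = residual(x, y, psf)
for _ in range(50):
    x = isra_step(x, y, psf)
```

Like RLM, the update is multiplicative, which is what makes the boundary-effect correction of the paper transferable between the two methods.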