This paper describes a new MATLAB software package of iterative regularization methods and test problems for large-scale linear inverse problems. The software package, called IR TOOLS, serves two related purposes: we provide implementations of a range of iterative solvers, including several recently proposed methods that are not available elsewhere, and we provide a set of large-scale test problems in the form of discretizations of 2D linear inverse problems. The solvers include iterative regularization methods where the regularization is due to the semi-convergence of the iterations, Tikhonov-type formulations where the regularization is explicitly formulated in the form of a regularization term, and methods that can impose bound constraints on the computed solutions. All the iterative methods are implemented in a very flexible fashion that allows the problem's coefficient matrix to be available as a (sparse) matrix, a function handle, or an object. The most basic call to all of the various iterative methods requires only this matrix and the right-hand side vector; if the method uses any special stopping criteria, regularization parameters, etc., then default values are set automatically by the code. Moreover, through the use of an optional input structure, the user can also have full control of any of the algorithm parameters. The test problems represent realistic large-scale problems found in image reconstruction and several other applications.
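The semi-convergence phenomenon mentioned above — the iteration error first decreases, then increases as the noise gets amplified, so early stopping acts as the regularizer — can be illustrated with a minimal NumPy sketch. This is not IR TOOLS code; the test problem (a Gaussian "blur-like" matrix) and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not IR TOOLS code): Landweber iteration on a small
# ill-posed problem. The error to the true solution typically decreases at
# first and may later increase as noise is amplified (semi-convergence),
# so stopping early acts as regularization.
rng = np.random.default_rng(0)

n = 64
t = np.linspace(0, 1, n)
# Hypothetical mildly ill-conditioned Gaussian kernel matrix
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-2 * rng.standard_normal(n)

omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2 / ||A||_2^2
x = np.zeros(n)
errors = []
for k in range(500):
    x = x + omega * (A.T @ (b - A @ x))   # Landweber update
    errors.append(np.linalg.norm(x - x_true))

k_best = int(np.argmin(errors))           # early-stopping index
```

The stopping rules shipped with packages like the one described (discrepancy principle, etc.) aim to locate `k_best` without knowing `x_true`.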
This paper introduces two new algorithms, belonging to the class of Arnoldi--Tikhonov regularization methods, which are particularly appropriate for sparse reconstruction. The main idea is to consider suitable adaptively defined regularization matrices that allow the usual 2-norm regularization term to approximate a more general regularization term expressed in the $p$-norm, $p\geq 1$. The regularization matrix can be updated both at each step and after some iterations have been performed, leading to two different approaches: the first one is based on the idea of the iteratively reweighted least squares method and can be obtained by considering flexible Krylov subspaces; the second one is based on restarting the Arnoldi algorithm. Numerical examples are given in order to show the effectiveness of these new methods, and comparisons with some other already existing algorithms are made.
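The core identity behind such adaptively defined regularization matrices can be checked numerically: at the current iterate $x_k$, a diagonal weight $W_k$ with entries $|x_i|^{(p-2)/2}$ makes the weighted 2-norm term reproduce the $p$-norm term. The sketch below uses an illustrative iterate and a small smoothing constant `eps` (an assumption, needed to avoid division by zero at exact zeros), in the spirit of iteratively reweighted least squares.

```python
import numpy as np

# At an iterate x_k, choose W_k = diag((x_i^2 + eps)^((p-2)/4)).
# Then ||W_k x_k||_2^2 = sum_i (x_i^2 + eps)^((p-2)/2) * x_i^2,
# which approximates ||x_k||_p^p up to the eps smoothing.
p = 1.0
eps = 1e-8                                      # smoothing (assumed value)
x_k = np.array([0.5, -2.0, 0.0, 3.0])           # illustrative iterate

w = (x_k ** 2 + eps) ** ((p - 2) / 4)           # diagonal of W_k
weighted = np.sum((w * x_k) ** 2)               # ||W_k x_k||_2^2
p_norm = np.sum(np.abs(x_k) ** p)               # ||x_k||_p^p
```

Updating $W_k$ at every step (flexible Krylov subspaces) or only after a batch of iterations (restarted Arnoldi) gives the two approaches the abstract distinguishes.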
In the framework of iterative regularization techniques for large-scale linear ill-posed problems, this paper introduces a novel algorithm for the choice of the regularization parameter when performing the Arnoldi–Tikhonov method. Assuming that we can apply the discrepancy principle, this new strategy works without restrictions on the choice of the regularization matrix. Moreover, the method can also be employed to detect the noise level whenever only an overestimate of it is available. Numerical experiments arising from the discretization of integral equations and from image restoration are presented.
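For reference, the discrepancy principle that this parameter-choice rule builds on selects $\lambda$ so that the residual norm matches $\tau\delta$, where $\delta$ is the noise norm and $\tau \gtrsim 1$ a safety factor. The sketch below applies it to plain Tikhonov regularization on a small dense problem (not the paper's Arnoldi-based algorithm); the test matrix, noise level, and the log-scale bisection are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the discrepancy principle for standard Tikhonov
# regularization: pick lambda with ||A x_lambda - b||_2 = tau * delta.
rng = np.random.default_rng(1)

n = 50
A = np.triu(np.ones((n, n))) / n        # toy discretized integration operator
x_true = np.linspace(0, 1, n) ** 2
e = 1e-2 * rng.standard_normal(n)
b = A @ x_true + e
delta = np.linalg.norm(e)               # noise norm (known here by construction)
tau = 1.01                              # safety factor

def residual(lam):
    # Solve (A^T A + lam I) x = A^T b and return the residual norm.
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return np.linalg.norm(A @ x - b)

# The residual norm increases monotonically with lam, so bisection
# (here in log scale) locates the discrepancy-principle parameter.
lo, hi = 1e-12, 1e3
for _ in range(200):
    mid = np.sqrt(lo * hi)
    if residual(mid) < tau * delta:
        lo = mid
    else:
        hi = mid
lam_dp = np.sqrt(lo * hi)
```

The paper's contribution is to carry this principle into the Arnoldi–Tikhonov setting without restrictions on the regularization matrix, and to cope with an overestimated $\delta$.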
In this paper we develop flexible Krylov methods for efficiently computing regularized solutions to large-scale linear inverse problems with an $\ell_2$ fit-to-data term and an $\ell_p$ penalization term, for $p \geq 1$. First we approximate the $p$-norm penalization term as a sequence of 2-norm penalization terms using adaptive regularization matrices in an iteratively reweighted norm fashion, and then we exploit flexible preconditioning techniques to efficiently incorporate the weight updates. To handle general (non-square) $\ell_p$-regularized least-squares problems, we introduce a flexible Golub–Kahan approach and exploit it within a Krylov–Tikhonov hybrid framework. The key benefits of our approach compared to existing optimization methods for $\ell_p$ regularization are that efficient projection methods replace inner-outer schemes and that expensive regularization parameter selection techniques can be avoided. Theoretical insights are provided, and numerical results from image deblurring and tomographic reconstruction illustrate the benefits of this approach, compared to well-established methods. Furthermore, we show that our approach for $p = 1$ can be used to efficiently compute solutions that are sparse with respect to some transformations. The underlying model is
$$ b = A x_{\mathrm{true}} + e, \tag{1} $$
where $b \in \mathbb{R}^m$ is the observed data, $A \in \mathbb{R}^{m \times n}$ models the forward process, $x_{\mathrm{true}} \in \mathbb{R}^n$ is the desired solution, and $e \in \mathbb{R}^m$ represents noise or errors in the observation. Due to the ill-posedness of the underlying problem [18], regularization should be applied to recover a meaningful approximation of $x_{\mathrm{true}}$ in (1). In this paper, we are interested in problems of the form
$$ \min_x \|A x - b\|_2^2 + \lambda \|\Psi x\|_p^p, \tag{2} $$
where $\|\cdot\|_p$ for $p \geq 1$ is the vectorial $p$-norm, $\lambda > 0$ is a regularization parameter, and $\Psi \in \mathbb{R}^{n \times n}$ is a nonsingular matrix. For $p = 2$ and $\Psi = I$, (2) is the standard Tikhonov regularization problem, and many efficient techniques, including hybrid iterative methods, have been proposed; see, e.g., [5,10,28,22].
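For the baseline case $p = 2$, $\Psi = I$ (standard Tikhonov), the solution has the well-known closed form $x_\lambda = (A^T A + \lambda I)^{-1} A^T b$, which equals the least-squares solution of a stacked system. The sketch below verifies this equivalence on hypothetical random data.

```python
import numpy as np

# Standard Tikhonov regularization (p = 2, Psi = I):
#   min_x ||A x - b||_2^2 + lam * ||x||_2^2
# Normal-equations solution vs. the stacked least-squares form
#   min_x || [A; sqrt(lam) I] x - [b; 0] ||_2
rng = np.random.default_rng(2)
m, n = 80, 60                          # illustrative sizes
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1                              # illustrative parameter

x_normal = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A_stack = np.vstack([A, np.sqrt(lam) * np.eye(n)])
b_stack = np.concatenate([b, np.zeros(n)])
x_stack, *_ = np.linalg.lstsq(A_stack, b_stack, rcond=None)
```

Hybrid Krylov methods like the one in this paper effectively apply such a solve on a small projected subproblem at each iteration rather than on the full system.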
However, optimization problems (2) for $p \neq 2$ can be significantly more challenging. For example, for $p = 1$, the so-called $\ell_1$-regularized problem suffers from non-differentiability at the origin; moreover, in some situations, one may wish to consider $0 < p < 1$, which results in a nonconvex optimization problem, see, e.g., [20,24,25]. In this paper, we will focus on $p \geq 1$, and henceforth we will refer to problem (2) with $\Psi = I$ as an "$\ell_p$-regularized problem" and problem (2) with $\Psi \neq I$ will be dubbed the "transformed $\ell_p$-regularized" problem. Typically the transformed $\ell_p$-regularized problem arises in cases where sparsity in some frequency domain (e.g., in a wavelet domain) is desired. Depending on the application, a sparsity transform may be included in both the fit-to-data and the regularization term. This was considered in [3] for image deblurring problems, where the resulting minimization problem was solved with an inner-outer iteration scheme. Most of the previously developed methods for $\ell_p$ minimization utilize nonlinear optimization schemes or iteratively reweighted optimization schemes, which can get very expensive.
This paper introduces a new strategy for setting the regularization parameter when solving large-scale discrete ill-posed linear problems by means of the Arnoldi–Tikhonov method. This new rule is essentially based on the discrepancy principle, although no initial knowledge of the norm of the error that affects the right-hand side is assumed; an increasingly accurate approximation of this quantity is recovered during the Arnoldi algorithm. Some theoretical estimates are derived in order to motivate our approach. Many numerical experiments, performed on classical test problems as well as on image deblurring problems, are presented.