The L-curve method is a well-known heuristic for choosing the regularization parameter in ill-posed problems: the parameter is selected at the point of maximal curvature of the L-curve. In this article, we propose a simplified version that essentially replaces the curvature by the derivative of the parameterization on the y-axis. The simplified method behaves similarly to the original L-curve method but, unlike the latter, it may serve as an error estimator under typical conditions. This allows us to prove convergence for the simplified L-curve method.

We consider a linear ill-posed operator equation

$$Ax = y, \qquad (1)$$

where, instead of the exact data $y$, only noisy data $y^\delta$ satisfying $\|y^\delta - y\| \le \delta$ are available; the bound $\delta$ is called the noise-level. In the case of heuristic parameter choice rules, of which the L-curve method is an example, this noise-level is considered unavailable.

As the inverse of $A$ is not bounded, problem (1) cannot be solved by classical inversion algorithms; rather, a regularization scheme has to be applied [8]. That is, one constructs a one-parametric family of continuous operators $(R_\alpha)_{\alpha > 0}$ that in some sense approximates the inverse of $A$ as $\alpha \to 0$. An approximation to the true solution of (1), denoted by $x_\alpha^\delta$, is computed by means of the regularization operators:

$$x_\alpha^\delta = R_\alpha y^\delta.$$

A delicate issue in regularization schemes is the choice of the regularization parameter $\alpha$, and the standard methods make use of the noise-level $\delta$. In situations where this is not available, so-called heuristic parameter choice methods [17] have been proposed. The L-curve method selects an $\alpha$ corresponding to the corner point of the graph $(\log\|Ax_\alpha^\delta - y^\delta\|, \log\|x_\alpha^\delta\|)$, parameterized by $\alpha$.

Recently, a convergence theory for certain heuristic parameter choice rules was developed [17,19]. Essential in this analysis is a restriction on the noise that rules out noise which is "too regular". Such noise conditions, in the form of Muckenhoupt-type conditions, were used in [17,19] and are currently the standard tool in the analysis of heuristic rules. If these conditions hold, then several well-known heuristic parameter choice rules serve as error estimators for the total error in typical regularization schemes, and convergence and convergence-rate results follow.

The L-curve method, however, does not seem to be accessible to such an analysis, although some of its properties were investigated, for instance, by Hansen [13,15] and Reginska [29]. In particular, it does not appear that the method can be related directly to some sort of error estimator.

There are various suggestions for efficient practical implementations of the L-curve method, such as Krylov-space methods [6,30] or model functions [24]; note that the method is also implemented in Hansen's Regularization Tools [14]. A generalization of the L-curve method in the form of the Q-curve method was recently suggested by Raus and Hämarik [28]. Other simplifications or variations are the V-curve [9] and the U-curve [22]. Overviews and comparisons of heuristic and non-heuristic methods are given in [2,11,12] and in the PhD thesis of Palm [26].

The aim of this article is to propose a simplified version of the L-curve method by dropping several t...
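To make the classical parameter choice concrete, here is a minimal sketch (illustrative only, not the authors' implementation): Tikhonov regularization over a grid of parameters, with $\alpha$ selected at the maximal finite-difference curvature of the discrete L-curve. The function names, the Hilbert-matrix test problem, the noise level, and the grid are all assumptions of this example.

```python
import numpy as np

def tikhonov(A, y_delta, alpha):
    """Tikhonov-regularized solution x_alpha = (A^T A + alpha I)^{-1} A^T y_delta."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)

def l_curve_alpha(A, y_delta, alphas):
    """Classical L-curve choice: alpha at the point of maximal curvature of
    (log ||A x_alpha - y_delta||, log ||x_alpha||), parameterized by alpha."""
    xs = [tikhonov(A, y_delta, a) for a in alphas]
    rho = np.log([np.linalg.norm(A @ x - y_delta) for x in xs])  # log residual norm
    eta = np.log([np.linalg.norm(x) for x in xs])                # log solution norm
    t = np.log(alphas)
    # Finite-difference curvature of the parameterized curve (rho(t), eta(t)).
    dr, de = np.gradient(rho, t), np.gradient(eta, t)
    d2r, d2e = np.gradient(dr, t), np.gradient(de, t)
    kappa = (dr * d2e - d2r * de) / (dr**2 + de**2) ** 1.5
    return alphas[int(np.argmax(kappa))]

# Synthetic ill-conditioned test problem (Hilbert matrix); purely illustrative.
n = 20
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
rng = np.random.default_rng(0)
y_delta = A @ np.ones(n) + 1e-3 * rng.standard_normal(n)
alphas = np.logspace(-10, 0, 80)
print(l_curve_alpha(A, y_delta, alphas))
```

The simplified method proposed in the article replaces this curvature criterion by an essentially derivative-based quantity; the sketch above implements only the classical curvature variant.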
We study the choice of the regularisation parameter for linear ill-posed problems in the presence of data noise and operator perturbations, for which a bound on the operator error is known but the data noise-level is unknown. We introduce a new family of semi-heuristic parameter choice rules that can be used in the stated scenario. We prove convergence of the new rules and provide numerical experiments that indicate an improvement compared to standard heuristic rules.
We study the choice of the regularisation parameter for linear ill-posed problems in the presence of noise that is possibly unbounded but only finite in a weaker norm, and when the noise-level is unknown. For this task, we analyse several heuristic parameter choice rules, such as the quasi-optimality, heuristic discrepancy, and Hanke-Raus rules and adapt the latter two to the weakly bounded noise case. We prove convergence and convergence rates under certain noise conditions. Moreover, we analyse and provide conditions for the convergence of the parameter choice by the generalised cross-validation and predictive mean-square error rules.
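For orientation, the following sketch shows textbook forms of two of the rules named above, applied to Tikhonov regularization in the classical bounded-noise setting; the weakly-bounded-noise adaptations analysed in the paper are not reproduced here, and the function names are hypothetical.

```python
import numpy as np

def tikhonov(A, y_delta, alpha):
    # Tikhonov solution x_alpha = (A^T A + alpha I)^{-1} A^T y_delta.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_delta)

def quasi_optimality(A, y_delta, alphas):
    """Quasi-optimality rule on a (typically geometric) grid alpha_0, alpha_1, ...:
    minimize ||x_{alpha_{k+1}} - x_{alpha_k}|| over k."""
    xs = [tikhonov(A, y_delta, a) for a in alphas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return alphas[int(np.argmin(diffs))]

def heuristic_discrepancy(A, y_delta, alphas):
    """Heuristic discrepancy (Hanke-Raus-type) rule for Tikhonov regularization:
    minimize ||A x_alpha - y_delta|| / sqrt(alpha)."""
    vals = [np.linalg.norm(A @ tikhonov(A, y_delta, a) - y_delta) / np.sqrt(a)
            for a in alphas]
    return alphas[int(np.argmin(vals))]
```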
The choice of a suitable regularization parameter is an important part of most regularization methods for inverse problems. In the absence of reliable estimates of the noise level, heuristic parameter choice rules can be used to accomplish this task. While they are already fairly well understood and tested in the case of linear problems, not much is known about their behaviour for nonlinear problems, and even less in the case of iterative regularization. Hence, in this paper, we numerically study the performance of some of these rules when used to determine a stopping index for Landweber iteration for various nonlinear inverse problems, chosen from practically relevant fields such as integral equations, parameter estimation, and tomography.
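As a point of reference for the linear case (the paper itself treats nonlinear problems, so this simplification is mine), a heuristic stopping index for Landweber iteration can be sketched with a Hanke-Raus-type functional, using the correspondence alpha ~ 1/k between the regularization parameter and the iteration count.

```python
import numpy as np

def landweber_heuristic_stop(A, y_delta, omega, k_max):
    """Linear Landweber iteration x_{k+1} = x_k + omega * A^T (y_delta - A x_k),
    with step size omega in (0, 2/||A||^2), stopped at the index minimizing
    the Hanke-Raus-type functional sqrt(k) * ||A x_k - y_delta||."""
    x = np.zeros(A.shape[1])
    best_val, best_x, best_k = np.inf, x.copy(), 0
    for k in range(1, k_max + 1):
        x = x + omega * (A.T @ (y_delta - A @ x))
        val = np.sqrt(k) * np.linalg.norm(A @ x - y_delta)
        if val < best_val:
            best_val, best_x, best_k = val, x.copy(), k
    return best_x, best_k
```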