The L-curve method is a well-known heuristic for choosing the regularization parameter in ill-posed problems: the parameter is selected at the point of maximal curvature of the L-curve. In this article, we propose a simplified version that essentially replaces the curvature by the derivative of the parameterization on the y-axis. This method shows a behaviour similar to that of the original L-curve method but, unlike the latter, it may serve as an error estimator under typical conditions. Thus, we can accordingly prove convergence for the simplified L-curve method.

We consider linear ill-posed problems

$$ Ax = y, \qquad (1) $$

where $A$ is a bounded linear operator whose inverse is unbounded, and where only noisy data $y^\delta$ satisfying $\|y^\delta - y\| \le \delta$ are available; the quantity $\delta$ is called the noise-level. In the case of heuristic parameter choice rules, of which the L-curve method is an example, this noise-level is considered unavailable.

As the inverse of $A$ is not bounded, the problem (1) cannot be solved by classical inversion algorithms; rather, a regularization scheme has to be applied [8]. That is, one constructs a one-parametric family of continuous operators $(R_\alpha)_{\alpha > 0}$ that in some sense approximates the inverse of $A$ as $\alpha \to 0$. An approximation to the true solution of (1), denoted by $x_\alpha^\delta$, is computed by means of the regularization operators:

$$ x_\alpha^\delta := R_\alpha y^\delta. $$

A delicate issue in regularization schemes is the choice of the regularization parameter $\alpha$, and the standard methods make use of the noise-level $\delta$. However, in situations when this is not available, so-called heuristic parameter choice methods [17] have been proposed. The L-curve method selects an $\alpha$ corresponding to the corner point of the graph $(\log \|A x_\alpha^\delta - y^\delta\|, \log \|x_\alpha^\delta\|)$ parameterized by $\alpha$.

Recently [17,19], a convergence theory for certain heuristic parameter choice rules was developed. Essential in this analysis is a restriction on the noise that rules out noise which is "too regular". Such noise conditions in the form of Muckenhoupt-type conditions were used in [17,19] and are currently the standard tool in the analysis of heuristic rules.
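To make the construction concrete, the following is a minimal numerical sketch of the classical L-curve rule: Tikhonov regularization, $x_\alpha^\delta = (A^*A + \alpha I)^{-1} A^* y^\delta$, is applied to a synthetic ill-conditioned matrix, and $\alpha$ is selected at the point of maximal discrete curvature of the curve $(\log \|A x_\alpha^\delta - y^\delta\|, \log \|x_\alpha^\delta\|)$. The test problem, all variable names, and the discretization are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Illustrative ill-posed test problem: an ill-conditioned matrix built from
# random orthogonal factors and rapidly decaying singular values (assumption).
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)           # singular values decaying to 1e-8
A = U @ np.diag(s) @ V.T

x_true = V @ np.sqrt(s)                     # a "smooth" exact solution
y = A @ x_true
y_delta = y + 1e-6 * rng.standard_normal(n) # noisy data; delta is not used below

# Tikhonov regularization: x_alpha = (A^T A + alpha I)^{-1} A^T y_delta,
# evaluated on a logarithmic grid of regularization parameters.
alphas = 10.0 ** np.linspace(-12, 0, 200)
log_res, log_sol = [], []
for a in alphas:
    x_a = np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ y_delta)
    log_res.append(np.log(np.linalg.norm(A @ x_a - y_delta)))
    log_sol.append(np.log(np.linalg.norm(x_a)))
log_res, log_sol = np.array(log_res), np.array(log_sol)

# Discrete curvature of the parametric curve t -> (log_res(t), log_sol(t)),
# t = log(alpha); the classical L-curve rule picks alpha of maximal curvature.
t = np.log(alphas)
dr, dsn = np.gradient(log_res, t), np.gradient(log_sol, t)
d2r, d2sn = np.gradient(dr, t), np.gradient(dsn, t)
kappa = (dr * d2sn - dsn * d2r) / ((dr**2 + dsn**2) ** 1.5 + 1e-30)
alpha_L = alphas[np.argmax(kappa)]
print(alpha_L)
```

The simplified rule proposed in the article replaces this curvature computation by (essentially) a derivative of the y-axis parameterization, which avoids the second derivatives that make the curvature numerically delicate on coarse parameter grids.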
If these conditions hold, then several well-known heuristic parameter choice rules serve as error estimators for the total error in typical regularization schemes, and convergence and convergence-rate results follow.

The L-curve method, however, does not seem to be accessible to such an analysis, although some of its properties were investigated, for instance, by Hansen [13,15] and Reginska [29]. Nevertheless, it does not appear that the method can be related directly to some sort of error estimator.

There are various suggestions for efficient practical implementations of the L-curve method, such as Krylov-space methods [6,30] or model functions [24]. Note that the method is also implemented in Hansen's Regularization Tools [14]. A generalization of the L-curve method in the form of the Q-curve method was recently suggested by Raus and Hämarik [28]. Other simplifications or variations are the V-curve [9] and the U-curve [22]. Overviews and comparisons of heuristic and non-heuristic methods are given in [2,11,12] and in the PhD thesis of Palm [26].

The aim of this article is to propose a simplified version of the L-curve method by dropping several t...