ω ∈ ∂R(f) for some ω ∈ Y. (2.4)

We refer to [61] for a comprehensive treatment of the modulus of continuity for linear operators in Hilbert spaces. We finish this section with a practicable criterion to verify order optimality.

Corollary 2.7 (Order optimality via the modulus of continuity). In the setting of Proposition 2.6 suppose φ : (0, ∞) → [0, ∞) is non-decreasing and that there exist constants c_ω > 0 and δ_0 > 0 such that

Then R is an order optimal reconstruction method on K.
Literature on convergence rates for sparsity promoting regularization

We give a brief overview of the literature on convergence rate theory for ℓ¹-regularization. In the early paper [78] from 2008 the rate O(δ^{1/2}) in the ℓ¹-norm is shown assuming that the unknown solution is sparse (i.e. has only finitely many non-vanishing entries) and that the forward operator is linear. The paper [52] provides the rate O(δ^{1/2}) for nonlinear operators under a source condition that coincides with (2.4) in the linear case. Furthermore, by additionally requiring sparsity of the unknown, the authors achieve the linear rate O(δ) and discuss that, in contrast to classical Tikhonov regularization, where the highest possible rate is O(δ^{2/3}), no saturation effect occurs in ℓ¹-regularization. To the best of the author's knowledge, the linear rate O(δ) was first proven in [14] for a regularization scheme similar to (2.1), which is called the residual method in [52]. In [50] a linear convergence rate is shown in the more general setting of positively homogeneous functionals under the source condition (2.4) and a mild injectivity-type assumption. Furthermore, in [53] it is proven (again under a mild injectivity-type assumption) that the condition (2.4) is not only sufficient but even necessary for a linear convergence rate of ℓ¹-regularization. The phenomenon of exact recovery, i.e. the question whether the support of the estimator equals the support of a sparse exact solution, is treated affirmatively in [79].

However, it is usually more realistic to assume that the true solution is only approximately sparse in the sense that it can be well approximated by sparse vectors. Using a variational source condition, convergence rates are shown for non-sparse solutions in [11] for linear forward operators. Therein the analysis is based on the assumption that the unit vectors belong to the range of the adjoint operator.
The rates are characterized in terms of the growth of the norms of the preimages of the unit vectors and the speed of decay of the true solution. In [3] the range condition is discussed further, and the convergence rate results are extended to ℓ^q-regularization with q < 1. We will discuss the latter range condition in more detail in Section 3.2. In [41] a relaxation of the condition on the unit vectors is introduced, and it is shown