We consider the minimization of the least squares functional with a Fréchet differentiable, lower semi-continuous, convex penalizer J. The penalizer maps functions of a Banach space V into R₊, J : V → R₊. More precisely, we assume that the given measured data f^δ is defined on a compactly supported domain Z ⊂ R₊ and lies in the Hilbert space H = L²(Z), i.e. f^δ ∈ H. The general Tikhonov cost functional associated with a given linear, compact and injective forward operator T : V → L²(Z) is then formulated as F_α(ϕ, f^δ) := ‖Tϕ − f^δ‖²_{L²(Z)} + α J(ϕ). Convergence of the regularized optimum solution ϕ_{α(δ)} ∈ arg min_{ϕ ∈ V} F_α(ϕ, f^δ) to the true solution ϕ† is analysed by means of the Bregman distance. The first part of this work provides a general convergence analysis for a strongly convex functional J in the cost functional F_α. The key observation in this part is that strong convexity of the penalty term J, with its convexity modulus, implies norm convergence in the Bregman metric sense. We also study the characterization of convergence by means of a concave, monotonically increasing index function Ψ : [0, ∞) → [0, ∞) with Ψ(0) = 0. In the second part, this general analysis is interpreted for the smoothed-TV functional J_{TV}^β(ϕ) = ∫_Ω √(|∇ϕ(x)|² + β) dx, where Ω is a compact and convex domain. To this end, a new lower bound for the Hessian of J_{TV}^β is estimated. The result of this work is applicable to any strongly convex functional.
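For orientation, here is a minimal sketch of the convergence mechanism described above, assuming a convexity modulus c > 0 for J and a rate estimate of the form D_J(ϕ_{α(δ)}, ϕ†) ≤ Ψ(δ); both the symbol c and this particular estimate are illustrative notation rather than statements taken from the paper:

D_J(ϕ, ψ) := J(ϕ) − J(ψ) − ⟨J′(ψ), ϕ − ψ⟩    (Bregman distance of J)
D_J(ϕ, ψ) ≥ (c/2) ‖ϕ − ψ‖²_V    (strong convexity with modulus c)
‖ϕ_{α(δ)} − ϕ†‖²_V ≤ (2/c) D_J(ϕ_{α(δ)}, ϕ†) ≤ (2/c) Ψ(δ) → 0 as δ → 0.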
This work explores the regularity properties of smoothed-TV regularization for target functions in the Hölder continuous class. Over a compact and convex domain Ω, we study the construction of a multivariate function ϕ(x) : Ω ⊂ R³ → R₊ as the optimized solution to the convex minimization problem min_{ϕ} F_α(ϕ, f^δ) := ‖Tϕ − f^δ‖² + α ∫_Ω √(|∇ϕ(x)|² + β) dx, for a fixed 0 < β < 1. We assume our target function to be Hölder continuous. Under this assumption, we establish a relation between the total variation of the target function and its Hölder coefficient. We prove that smoothed-TV regularization is an admissible regularization strategy by evaluating the discrepancy ‖Tϕ_α − f^δ‖ ≤ τδ for some fixed τ ≥ 1. To do so, we need to assume the target function to be of class C^{1+}(Ω). From here, using the fact that the penalty J(·) is strongly convex, we show the convergence of ‖ϕ_α − ϕ†‖, where ϕ_α is the optimum and ϕ† is the true solution of the minimization problem above. We demonstrate that strong convexity and 2-convexity are in fact different names for the same concept. In addition, we make use of the Bregman divergence in order to quantify the rate of convergence.
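Read as a Morozov-type a posteriori criterion, the admissibility statement above admits a short hedged sketch; the assumptions f = Tϕ† (exact data) and ‖f − f^δ‖ ≤ δ (noise level) are the standard ones in this setting and are spelled out here only for illustration:

Choose α = α(δ, f^δ) such that ‖Tϕ_α − f^δ‖_{L²(Z)} ≤ τδ, with fixed τ ≥ 1;
then, by the triangle inequality, ‖Tϕ_α − Tϕ†‖ ≤ ‖Tϕ_α − f^δ‖ + ‖f^δ − f‖ ≤ (τ + 1)δ → 0 as δ → 0.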