The aim of this paper is to introduce and study a two-step debiasing method for variational regularization. After solving the standard variational problem, the key idea is to add a consecutive debiasing step minimizing the data fidelity on an appropriate set, the so-called model manifold. The latter is defined by Bregman distances or infimal convolutions thereof, using the (uniquely defined) subgradient appearing in the optimality condition of the variational method. For particular settings, such as anisotropic $\ell^1$- and TV-type regularization, previously used debiasing techniques are shown to be special cases. The proposed approach is, however, easily applicable to a wider range of regularizations. The two-step debiasing is shown to be well-defined and to optimally reduce bias in a certain setting. In addition to visual and PSNR-based evaluations, different notions of bias and variance decompositions are investigated in numerical studies. The improvements offered by the proposed scheme are demonstrated, and its performance is shown to be comparable to optimal results obtained with Bregman iterations.

The first step computes the variational solution $u_\alpha$ of the standard problem $u_\alpha \in \arg\min_{u \in X} H(Au, f) + \alpha J(u)$ (1.1), whose optimality condition reads
$$A^* \partial_u H(A u_\alpha, f) + \alpha\, p_\alpha = 0, \qquad p_\alpha \in \partial J(u_\alpha), \qquad (1.2)$$
where $\partial_u H$ is the derivative of $H$ with respect to the first argument. Now we proceed to a second step, where we only keep the subgradient $p_\alpha$ and minimize
$$\hat u_\alpha \in \arg\min_{u \in X} H(Au, f) \quad \text{subject to} \quad p_\alpha \in \partial J(u). \qquad (1.3)$$
Obviously, this problem is only of interest if there is no one-to-one relation between subgradients and primal values $u$; otherwise we always obtain $\hat u_\alpha = u_\alpha$. The most interesting case with respect to applications is that of $J$ being absolutely one-homogeneous, i.e. $J(\lambda u) = |\lambda| J(u)$ for all $\lambda \in \mathbb{R}$, where the subdifferential can be multivalued at least at $u = 0$.

The debiasing step can be reformulated in an equivalent way as
$$\hat u_\alpha \in \arg\min_{u \in X} H(Au, f) \quad \text{subject to} \quad D_J^{p_\alpha}(u, u_\alpha) = 0, \qquad (1.4)$$
with the (generalized) Bregman distance given by $D_J^{p}(u, v) = J(u) - J(v) - \langle p, u - v\rangle$, $p \in \partial J(v)$. We remark that for absolutely one-homogeneous $J$ this simplifies to $D_J^{p}(u, v) = J(u) - \langle p, u\rangle$. The reformulation in terms of a Bregman distance indicates a first connection to Bregman iterations, which we make more precise in the sequel of the paper.

Summing up, we examine the following two-step method:
1) Compute the (biased) solution $u_\alpha$ of (1.1) with optimality condition (1.2),
2) Compute the (debiased) solution $\hat u_\alpha$ as the minimizer of (1.3) or, equivalently, (1.4).

In order to relate further to the previous approaches of debiasing $\ell^1$-minimizers given only the support and not the sign, as well as to the approach with linear model subspaces, we consider another debiasing approach that is blind with respect to the sign. The natural generalization in the case of an absolutely one-homogeneous functional $J$ is to replace the second step by
$$\hat u_\alpha \in \arg\min_{u \in X} H(Au, f) \quad \text{subject to} \quad \bigl[D_J^{p_\alpha}(\cdot, u_\alpha)\,\Box\, D_J^{-p_\alpha}(\cdot, -u_\alpha)\bigr](u) = 0,$$
where the constraint functional denotes the infimal convolution between the Bregman distances $D_J^{p_\alpha}(\cdot, u_\alpha)$ and $D_J^{-p_\alpha}(\cdot, -u_\alpha)$, evaluated at $u \in X$. The infimal convolution of two functionals $F$ and $G$ on a Banach space $X$ is defined as
$$(F\,\Box\,G)(u) = \inf_{\substack{w, z \in X \\ u = w + z}} F(w) + G(z).$$
Consequently, $\hat u_\alpha$ is also a solution of (3.3).

Theorem 4.12. The set $M_B = \{u \in X \mid D_J^{p}(u, v) = 0\}$ is a nonempty convex cone.

Proof. The map $u \mapsto D_J^{p}(u, v)$ is convex and nonnegative, hence $\{u \mid D_J^{p}(u, v) = 0\} = \{u \mid D_J^{p}(u, v) \le 0\}$ is convex as a sublevel set of a convex functional. Moreover, for each $c \ge 0$ we have, by the absolute one-homogeneity of $J$, $D_J^{p}(cu, v) = J(cu) - \langle p, cu\rangle = c\, D_J^{p}(u, v)$, so $M_B$ is a cone; it is nonempty since it contains $u = v$ and $u = 0$.
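To make the two-step method summarised above concrete, here is a minimal numerical sketch for the simplest special case of anisotropic $\ell^1$-regularised denoising (quadratic fidelity, $A$ the identity). This is not code from the paper; the function name and the closed-form solution of the second step are illustrative for this particular setting only.

```python
import numpy as np

def debiased_l1_denoising(f, alpha):
    """Two-step debiasing sketch for anisotropic l1 denoising (A = identity).

    Step 1: u_alpha = argmin_u 0.5*||u - f||^2 + alpha*||u||_1  (soft thresholding),
            with subgradient p_alpha = (f - u_alpha)/alpha from the optimality condition.
    Step 2: minimise the fidelity 0.5*||u - f||^2 over the model manifold
            {u : D_J^{p_alpha}(u, u_alpha) = 0}, i.e. u_i = 0 where |p_i| < 1 and
            sign(u_i) in {0, sign(p_i)} where |p_i| = 1.
    """
    # Step 1: soft thresholding and the (unique) subgradient
    u_alpha = np.sign(f) * np.maximum(np.abs(f) - alpha, 0.0)
    p_alpha = (f - u_alpha) / alpha

    # Step 2: componentwise minimisation of the fidelity on the model manifold
    active = np.abs(p_alpha) >= 1.0 - 1e-12          # components with |p_i| = 1
    u_hat = np.zeros_like(f)
    s = np.sign(p_alpha[active])
    u_hat[active] = s * np.maximum(s * f[active], 0.0)  # project f_i onto the admissible half-line
    return u_alpha, u_hat
```

In this special case the debiased solution reduces to hard thresholding of the data: the support and signs found in the first step are kept, while the magnitudes shrunk by the soft thresholding are restored.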
Joint reconstruction has recently attracted a lot of attention, especially in the field of medical multi-modality imaging such as PET-MRI. Most of the developed methods rely on the comparison of image gradients, or more precisely their location, direction and magnitude, to exploit structural similarities between the images. A challenge, and still an open issue for most of these methods, is handling images that live on entirely different scales, i.e. with magnitudes of gradients that cannot be matched by a global rescaling of the data. We propose the use of generalized Bregman distances, and infimal convolutions thereof, with respect to the well-known total variation functional. Using a total variation subgradient, respectively the vector field involved in it, rather than an image gradient naturally discards the magnitudes of gradients, which in particular resolves the scaling issue. Additionally, the presented method features a weighting that allows one to control the amount of interaction between channels. We give insights into the general behavior of the method before tailoring it to a particular application, namely joint PET-MRI reconstruction. To this end, we compute joint reconstructions from blurry Poisson data for PET and undersampled Fourier data for MRI, and show that both modalities benefit mutually. In particular, the results are superior to the respective separate reconstructions and to other joint reconstruction methods.
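As a brief illustration of this point (a sketch under the assumption that the prior image $v$ is smooth with $\nabla v \ne 0$ and that boundary terms vanish; the notation $q_v$ is introduced here for illustration), a total variation subgradient of $v$ has the form $p_v = -\operatorname{div} q_v$ with the unit vector field $q_v = \nabla v / |\nabla v|$, and the corresponding Bregman distance reads
$$D_{TV}^{p_v}(u, v) = TV(u) - \langle p_v, u\rangle = \int_\Omega \bigl(|\nabla u| - q_v \cdot \nabla u\bigr)\, dx \;\ge\; 0,$$
which vanishes precisely where $\nabla u$ is aligned with $\nabla v$, independently of the gradient magnitudes of either image.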
The goal of dynamic magnetic resonance imaging (dynamic MRI) is to visualize tissue properties and their local changes over time that are traceable in the MR signal. We propose a new variational approach for the reconstruction of subsampled dynamic MR data, which combines smooth temporal regularization with spatial total variation regularization. In addition, it uses the infimal convolution of two total variation Bregman distances to incorporate structural a priori information from an anatomical MRI prescan into the reconstruction of the dynamic image sequence. The method encourages the reconstructed image sequence to have a high structural similarity to the anatomical prior, while still allowing for local intensity changes that are smooth in time. The approach is evaluated on artificial data simulating functional magnetic resonance imaging (fMRI), and on experimental dynamic contrast-enhanced magnetic resonance data from small-animal imaging acquired with radial golden-angle sampling of k-space.
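One plausible way to assemble such a functional from the building blocks named above (a sketch, not the paper's exact formulation; the operator $\mathcal{A}$, the weights $\alpha, \beta$ and the discrete time derivative are placeholder choices) is
$$\min_{u_1,\dots,u_T}\; \sum_{t=1}^{T} \frac{1}{2}\,\|\mathcal{A} u_t - f_t\|_2^2 \;+\; \alpha \sum_{t=1}^{T} \bigl[D_{TV}^{p_v}(\cdot, v)\,\Box\, D_{TV}^{-p_v}(\cdot, -v)\bigr](u_t) \;+\; \beta \sum_{t=2}^{T} \|u_t - u_{t-1}\|_2^2,$$
where $\mathcal{A}$ denotes a subsampled MR forward operator, $f_t$ the k-space data of frame $t$, $v$ the anatomical prescan with TV subgradient $p_v$, and $\Box$ the infimal convolution.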
We investigate the convergence of a recently popular class of first-order primal-dual algorithms for saddle-point problems in the presence of errors in the proximal maps and gradients. We study several types of errors and show that, provided these errors decay sufficiently fast, the same convergence rates as for the error-free algorithm can be established. More precisely, we prove the (optimal) $O(1/N)$ convergence to a saddle point in finite dimensions for the class of non-smooth problems considered in this paper, and prove an $O(1/N^2)$ or even linear convergence rate if either the primal or the dual objective, or both, are strongly convex. Moreover, we show that rates can still be established under a slower decay of the errors; these rates are, however, slower and depend directly on the decay of the errors. We demonstrate the performance and practical use of the algorithms on the example of nested algorithms and show how they can be used to split the global objective more efficiently.
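The following is a minimal sketch of the error-free primal-dual hybrid gradient iteration for a one-dimensional TV-L2 denoising problem; the function name, discretization and step sizes are illustrative choices, not taken from the paper. In the inexact setting analysed here, the two proximal steps below would be replaced by approximate evaluations whose errors decay along the iterations.

```python
import numpy as np

def pdhg_tv_denoise_1d(f, lam, n_iter=500):
    """Primal-dual hybrid gradient for  min_x 0.5*||x - f||^2 + lam*||D x||_1,
    with D the forward-difference operator (1D total variation)."""
    n = f.size
    x = f.copy()
    x_bar = x.copy()
    y = np.zeros(n - 1)                      # dual variable associated with D x
    L = 2.0                                  # ||D|| <= 2 for forward differences
    tau = sigma = 0.99 / L                   # step sizes with tau*sigma*||D||^2 < 1

    D = lambda v: np.diff(v)                 # forward difference
    Dt = lambda w: np.concatenate(([-w[0]], -np.diff(w), [w[-1]]))  # adjoint of D

    for _ in range(n_iter):
        # dual proximal step: projection onto {|y_i| <= lam}
        # (in the inexact setting this evaluation is only approximate)
        y = np.clip(y + sigma * D(x_bar), -lam, lam)
        # primal proximal step: prox of 0.5*||. - f||^2
        x_new = (x - tau * Dt(y) + tau * f) / (1.0 + tau)
        # extrapolation (theta = 1)
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

A nested variant, as discussed in the paper, would compute one of these proximal maps by an inner iterative solver, making the error analysis directly applicable.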
The goal of this paper is to further develop an approach to inverse problems with imperfect forward operators that is based on partially ordered spaces. Studying the dual problem yields useful insights into the convergence of the regularised solutions and allows us to obtain convergence rates in terms of Bregman distances, as is usual in inverse problems, under an additional assumption on the exact solution called the source condition. These results are obtained for general absolutely one-homogeneous functionals. In the special case of TV-based regularisation we also study the structure of regularised solutions and prove convergence of their level sets to those of an exact solution. Finally, using the developed theory, we adapt the concept of debiasing to inverse problems with imperfect operators and propose an approach to pointwise error estimation in TV-based regularisation.

Keywords: inverse problems, imperfect forward models, total variation, extended support, Bregman distances, convergence rates, error estimation, debiasing

We consider the operator equation
$$Au = f, \qquad (1.1)$$
where $A : L^1(\Omega) \to L^\infty(\Omega)$ is a linear operator and $\Omega \subset \mathbb{R}^m$ is a bounded domain. We assume that there exists a non-negative solution of (1.1). For an appropriate functional $J(\cdot) : L^1 \to \mathbb{R}_+ \cup \{\infty\}$ we consider non-negative $J$-minimising solutions, which solve the following problem:
$$\min_{u \ge 0} J(u) \quad \text{s.t.} \quad Au = f. \qquad (1.2)$$
We assume that the feasible set in (1.2) has at least one point with a finite value of $J$ and denote a (possibly non-unique) solution of (1.2) by $\bar u_J$. Throughout this paper it is assumed that the regularisation functional $J(\cdot)$ is convex, proper and absolutely one-homogeneous.

In practice the data $f$ are not known precisely and only their perturbed version $\tilde f$ is available. In this case, we cannot simply replace the constraint $Au = f$ in (1.2) with $Au = \tilde f$, since the solutions of the original problem (1.1) would no longer be feasible. Therefore, we need to relax the equality in (1.2) to guarantee the feasibility of solutions of the original problem (1.1). This is the idea of the residual method [20, 23]. If the error in the data is bounded by some known constant $\delta$, the residual method amounts to solving the following constrained problem:
$$\min_{u \ge 0} J(u) \quad \text{s.t.} \quad \|Au - \tilde f\| \le \delta. \qquad (1.3)$$
The fidelity function becomes in this case the characteristic function of the convex set $\{u : \|Au - \tilde f\| \le \delta\}$. In the linear case, the residual method is equivalent to Tikhonov regularisation over $u \in L^1$ with the regularisation parameter $\alpha = \alpha(\tilde f, \delta)$ chosen according to Morozov's discrepancy principle [23].

In many practical situations not only the data contain errors, but also the forward operator that generated the data is not perfectly known. In order to guarantee the feasibility of solutions of the original problem (1.1) in the constrained problem (1.3), one needs to account for the errors in the operator in the feasible set. If the errors in the operator are bounded by a known constant $h$ (in the operator norm), the feasible set can be amended as follows in order to guarantee feasibility of the solutions of the original problem (1.1):
$$\{u \ge 0 : \|\tilde A u - \tilde f\| \le \delta + h\,\|u\|\},$$
where $\tilde A$ is the noisy operator.
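As a brief check (not taken verbatim from the paper, and assuming $\|A - \tilde A\| \le h$ and $\|f - \tilde f\| \le \delta$ in the respective norms), any solution $\bar u$ of (1.1) indeed lies in this amended feasible set, since $A\bar u = f$ and therefore
$$\|\tilde A \bar u - \tilde f\| \;\le\; \|(\tilde A - A)\bar u\| + \|A\bar u - f\| + \|f - \tilde f\| \;\le\; h\,\|\bar u\| + 0 + \delta.$$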