The possibilities of exploiting the special structure of d.c. programs, which consist of minimising the difference of two convex functions, are currently more or less limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. These assume that either the convex or the concave part, or both, are evaluated via one of their subgradients. In this paper we propose an algorithm which allows both the concave and the convex part to be evaluated via their proximal points. Additionally, we allow a smooth part, which is evaluated via its gradient. In the spirit of primal-dual splitting algorithms, the concave part may be the composition of a concave function with a linear operator, which are, however, evaluated separately. For this algorithm we show that every cluster point is a solution of the optimisation problem. Furthermore, we show the connection to the Toland dual problem and prove a descent property for the objective function values of a primal-dual formulation of the problem. Convergence of the iterates is shown if this objective function satisfies the Kurdyka–Łojasiewicz property. In the last part, we apply the algorithm to an image processing model.
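As a point of reference for the abstract above, the classical DCA baseline that the paper generalises can be sketched on a toy ℓ1−ℓ2 sparse regularisation problem. This is a minimal illustration, not the paper's double-proximal algorithm: the problem, function names, and parameter values below are all chosen for the example.

```python
import numpy as np

def soft(v, t):
    # proximal operator of t*||.||_1 (soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dc_objective(x, b, lam):
    # g(x) - h(x) with g(x) = 0.5||x - b||^2 + lam*||x||_1 and h(x) = lam*||x||_2
    return 0.5 * np.sum((x - b) ** 2) + lam * np.sum(np.abs(x)) - lam * np.linalg.norm(x)

def dca(b, lam=1.0, iters=50):
    x = b.copy()
    for _ in range(iters):
        # pick y in the subdifferential of the concave part h at x
        n = np.linalg.norm(x)
        y = lam * x / n if n > 0 else np.zeros_like(x)
        # solve the convex subproblem exactly via the prox of lam*||.||_1:
        # argmin_x 0.5||x - b||^2 + lam*||x||_1 - <y, x> = soft(b + y, lam)
        x = soft(b + y, lam)
    return x

b = np.array([3.0, -0.5, 0.1])
x = dca(b)
```

The descent property mentioned in the abstract is visible here: each DCA step cannot increase the d.c. objective.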
We first point out several flaws in the recent paper [R. Shefi, M. Teboulle: Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization, SIAM J. Optim. 24, 269-297, 2014], which proposes two ADMM-type algorithms for solving convex optimization problems involving compositions with linear operators, and we show how some of the considered arguments can be fixed. Besides this, we formulate a variant of the ADMM algorithm that is able to handle convex optimization problems involving an additional smooth function in the objective, which is evaluated via its gradient. Moreover, in each iteration we allow the use of variable metrics, while the investigations are carried out in the setting of infinite-dimensional Hilbert spaces. This algorithmic scheme is investigated from the point of view of its convergence properties.
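For readers unfamiliar with the ADMM skeleton being varied here, a minimal scaled-dual ADMM on the toy splitting min 0.5||x−b||² + λ||z||₁ subject to x = z can serve as orientation. The variable-metric and smooth-term features of the proposed variant are omitted; all names and parameter choices are illustrative.

```python
import numpy as np

def soft(v, t):
    # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm(b, lam=0.5, rho=1.0, iters=500):
    # scaled-dual ADMM for min_x 0.5||x - b||^2 + lam*||x||_1,
    # split as f(x) + g(z) with the constraint x = z
    z = np.zeros_like(b)
    u = np.zeros_like(b)
    for _ in range(iters):
        # x-update: minimise 0.5||x - b||^2 + (rho/2)||x - z + u||^2 in closed form
        x = (b + rho * (z - u)) / (1.0 + rho)
        # z-update: prox of (lam/rho)*||.||_1
        z = soft(x + u, lam / rho)
        # scaled dual ascent on the constraint residual x - z
        u = u + x - z
    return z

b = np.array([2.0, 0.1, -1.0])
z = admm(b)
```

For this separable problem the minimiser is the soft-thresholded data, so the iterates can be checked against a closed-form answer.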
In this work, we consider methods for solving large-scale optimization problems with a possibly nonsmooth objective function. The key idea is to first specify a class of optimization algorithms using a generic iterative scheme involving only linear operations and applications of proximal operators. This scheme contains many modern primal-dual first-order solvers, like the Douglas-Rachford and hybrid gradient methods, as special cases. Moreover, we show convergence to an optimal point for a new method which also belongs to this class. Next, we interpret the generic scheme as a neural network and use unsupervised training to learn the best set of parameters for a specific class of objective functions while imposing a fixed number of iterations. In contrast to other "learning to optimize" approaches, we learn parameters only within the set of convergent schemes. As use cases, we consider optimization problems arising in tomographic reconstruction and image deconvolution, and in particular a family of total variation regularization problems. (* equal contribution; arXiv:1808.00946v1 [math.OC], 2 Aug 2018)
Such problems arise, for example, in X-ray computed tomography (CT) [40,41], magnetic resonance imaging (MRI) [18], and electron tomography [42]. A key challenge is to handle the computational burden. In imaging, and especially in three-dimensional imaging, the resulting optimization problem is very high-dimensional even after clever digitization and might involve more than one billion variables. Moreover, many regularizers that are popular in imaging (see Section 5), like those associated with sparsity, result in a nonsmooth objective function. These issues prevent the use of variational methods in time-critical applications, such as medical imaging in a clinical setting.
Modern methods which aim at overcoming these obstacles are typically based on the proximal point algorithm [46] and operator splitting techniques; see, e.g., [10, 12, 14-16, 20-22, 25, 29, 33, 34] and references therein. The main objective of the paper is to offer a computationally tractable approach for minimizing large-scale nondifferentiable convex functions. The key idea is to "learn" how to optimize from training data, resulting in an iterative scheme that is optimal given a fixed number of steps, while its convergence properties can still be analyzed. We will make this precise in Section 4. Similar ideas have been proposed previously in [8,27,35], but these approaches are either limited to specific classes of iterative schemes, like gradient-descent-like schemes [8,35] that are not applicable to nonsmooth optimization, or specialized to a specific class of regularizers as in [27], which limits the possible choices of regularizers and forward operators. The approach taken here builds upon these ideas and yields a general framework for learning optimization algorithms applicable to optimization problems of the type (1.1), inspired by the proximal-type methods mentioned above. A key feature is to present a general formulation that includes several...
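One concrete member of a scheme built from "linear operations and applications of proximal operators" is the proximal gradient method (ISTA) applied to a lasso problem. The sketch below is only an illustration of that building block, not the paper's generic parameterised scheme; problem sizes and the regularization weight are arbitrary choices.

```python
import numpy as np

def soft(v, t):
    # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(A, b, lam, tau, iters=200):
    # proximal gradient (ISTA): each iteration is one linear operation
    # (the gradient step) followed by one proximal operation
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - tau * A.T @ (A @ x - b), tau * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
tau = 0.9 / np.linalg.norm(A, 2) ** 2  # step size below 1/L guarantees descent
x = prox_grad(A, b, lam=0.1, tau=tau)
```

In the learned setting described above, quantities like the step size tau would become trainable parameters constrained to the convergent regime.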
In this paper we are concerned with solving monotone inclusion problems expressed as the sum of a set-valued maximally monotone operator, a single-valued maximally monotone operator, and the normal cone to the nonempty set of zeros of another set-valued maximally monotone operator. Depending on the nature of the single-valued operator, we propose two iterative penalty schemes, both addressing the set-valued operators via backward steps. The single-valued operator is evaluated via a single forward step if it is cocoercive, and via two forward steps if it is monotone and Lipschitz continuous. The latter situation represents the starting point for dealing with monotone inclusion problems with complex structure from an algorithmic point of view.
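The "single forward step plus backward step" building block mentioned above is the forward-backward iteration. The sketch below shows that block alone for a cocoercive single-valued operator B(x) = x − b and the set-valued operator A = ∂(λ‖·‖₁); the penalty term and normal-cone structure of the paper's schemes are omitted, and all parameter values are illustrative.

```python
import numpy as np

def soft(v, t):
    # resolvent (backward step) of the operator t*∂||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(b, lam, gamma=0.5, iters=100):
    # one forward step on the cocoercive operator B(x) = x - b,
    # followed by one backward (resolvent) step on A = ∂(lam*||.||_1)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = soft(x - gamma * (x - b), gamma * lam)
    return x

b = np.array([2.0, -0.3, 0.05])
x = forward_backward(b, lam=0.5)
```

The limit solves the inclusion 0 ∈ λ∂‖x‖₁ + x − b, whose solution here is the soft-thresholded data.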
The backward-backward algorithm is a tool for finding minima of a regularization of the sum of two convex functions in Hilbert spaces. We generalize this setting to Hadamard spaces and prove the convergence of an error-tolerant version of the backward-backward method.
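In the Euclidean special case where both convex functions are indicators of closed convex sets, the backward-backward iteration reduces to alternating projections, which gives a quick feel for the method. The Hadamard-space setting and error tolerance of the abstract are not reflected here; the two sets below are arbitrary illustrative choices.

```python
import numpy as np

def prox_ind_line(x):
    # backward step on the indicator of C = {x : x[0] = 1}, i.e. projection onto C
    y = x.copy()
    y[0] = 1.0
    return y

def prox_ind_ball(x, r=2.0):
    # backward step on the indicator of the ball of radius r, i.e. projection onto it
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def backward_backward(x, iters=100):
    # compose the two backward (proximal) steps in each iteration
    for _ in range(iters):
        x = prox_ind_line(prox_ind_ball(x))
    return x

x = backward_backward(np.array([5.0, 5.0]))
```

Since the two sets intersect, the iterates settle on a point lying (approximately) in both.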