DC programming and its DC algorithm (DCA) address the problem of minimizing a function f = g − h (with g and h lower semicontinuous proper convex functions on R^n) over the whole space. Based on local optimality conditions and DC duality, DCA has been successfully applied to a wide variety of nondifferentiable nonconvex optimization problems, for which it often yields global solutions and proves more robust and more efficient than related standard methods, especially in the large-scale setting. The computational efficiency of DCA motivates a deeper and more complete study of DC programming, based on the special class of DC programs, called polyhedral DC programs, in which either g or h is polyhedral convex. DC duality is investigated in a simpler way that is more convenient for the study of optimality conditions, and new practical results on local optimality are presented. We emphasize regularization techniques in DC programming for constructing suitable equivalent DC programs for nondifferentiable nonconvex optimization problems, and we point out significant open questions that remain to be answered. A deeper insight into DCA is provided, shedding new light on the algorithm and partly explaining its efficiency. Finally, DC models of real-world nonconvex optimization problems are reported.
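As a concrete illustration of the generic DCA scheme (take a subgradient of h at the current iterate, then minimize the resulting convex surrogate of f), here is a minimal sketch on a toy one-dimensional DC program. The decomposition f(x) = x^4 − 2x^2 with g(x) = x^4 and h(x) = 2x^2 is chosen purely for illustration and does not come from the paper.

```python
import numpy as np

# Minimal DCA sketch on a toy one-dimensional DC program (illustration only):
# minimize f(x) = x**4 - 2*x**2, written as f = g - h with
# g(x) = x**4 and h(x) = 2*x**2, both convex.
def dca_toy(x0, max_iter=50, tol=1e-10):
    x = x0
    for _ in range(max_iter):
        # Step 1: take y_k in the subdifferential of h at x_k (h is smooth here).
        y = 4.0 * x                      # h'(x) = 4x
        # Step 2: x_{k+1} minimizes the convex function g(x) - y_k * x,
        # i.e. solves 4 x**3 = y_k, which has a closed form for this toy g.
        x_new = np.cbrt(y / 4.0)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

print(dca_toy(0.5))   # converges to 1.0, a global minimizer of f (f(1) = -1)
```

Each iteration replaces the concave part −h by its linearization at the current point, so the nonconvex problem is handled through a sequence of convex subproblems.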
Abstract. This paper is devoted to difference of convex functions (d.c.) optimization: d.c. duality, local and global optimality conditions in d.c. programming, the d.c. algorithm (DCA), and its application to solving the trust-region problem. The DCA is an iterative method that is quite different from well-known related algorithms. Thanks to the particular structure of the trust-region problem, the DCA is very simple (requiring only matrix-vector products) and, in practice, converges to the global solution. The inexpensive implicitly restarted Lanczos method of Sorensen is used to check the optimality of solutions provided by the DCA. When a nonglobal solution is found, a simple numerical procedure is introduced both to find a feasible point having a smaller objective value and to restart the DCA at this point. It is shown that in the nonconvex case, the DCA converges to the global solution of the trust-region problem, using only matrix-vector products and requiring at most 2m + 2 restarts, where m is the number of distinct negative eigenvalues of the coefficient matrix that defines the problem. Numerical simulations establish the robustness and efficiency of the DCA compared to standard related methods, especially for large-scale problems.
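The claim that each DCA step needs only matrix-vector products can be illustrated with a standard DC decomposition of the trust-region subproblem: g(x) = (rho/2)||x||^2 plus the indicator of the ball, and h(x) = (rho/2)||x||^2 minus the quadratic objective, where rho is any upper bound on the largest eigenvalue of A so that h is convex. The sketch below is an assumption-laden illustration of that scheme, not the paper's exact implementation; in particular it omits the Lanczos-based optimality check and the restarting procedure described in the abstract.

```python
import numpy as np

# Hedged sketch of DCA for the trust-region subproblem
#   minimize 0.5*x.T @ A @ x + b.T @ x   subject to  ||x|| <= r,
# with the DC decomposition g(x) = (rho/2)||x||^2 + indicator of the ball,
# h(x) = (rho/2)||x||^2 - objective, rho >= largest eigenvalue of A.
# Each iteration needs a single matrix-vector product.
def dca_trust_region(A, b, r, rho, x0=None, max_iter=500, tol=1e-9):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(max_iter):
        y = rho * x - (A @ x + b)         # y_k = grad h(x_k)
        x_new = y / rho                   # unconstrained minimizer of g(x) - y.x
        nrm = np.linalg.norm(x_new)
        if nrm > r:                       # project back onto the ball ||x|| <= r
            x_new *= r / nrm
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Small usage example with an indefinite matrix (illustrative values only).
A = np.diag([-2.0, 1.0, 3.0])
b = np.array([1.0, -1.0, 0.5])
print(dca_trust_region(A, b, r=1.0, rho=3.5))  # rho bounds the eigenvalues of A
```

As the abstract notes, a single DCA run may stop at a nonglobal KKT point, which is why the paper combines the iteration with an optimality check and restarts.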
In this paper we present a method for optimally estimating the rotation and translation between a camera and a 3-D object from point and/or line correspondences. First we devise an error function, and second we show how to minimize this error function. The quadratic nature of this function is made possible by representing rotation and translation with a dual number quaternion. We provide a detailed account of the computational aspects of a trust-region optimization method. This method compares favourably with Newton's method, which has been used extensively to solve the problem at hand, with the Faugeras-Toscani linear method [6] for calibrating a camera, and with the Levenberg-Marquardt non-linear optimization method. Finally we present some experimental results which demonstrate the robustness of our method with respect to image noise and matching errors.
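The paper's error function is quadratic in a dual-number quaternion and is minimized with a trust-region method. As a much simplified illustration of the same kind of rotation-plus-translation least-squares fit, the sketch below aligns 3-D points using a unit-quaternion parameterization and SciPy's trust-region reflective solver; the data and parameterization are hypothetical, and this is not the paper's dual-quaternion algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Generic illustration (not the paper's dual-quaternion formulation): estimate
# the rotation and translation aligning 3-D model points to observed points by
# least squares, parameterizing rotation with a unit quaternion.
def quat_to_rot(q):
    w, x, y, z = q / np.linalg.norm(q)   # normalize so the matrix is a rotation
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def residuals(params, pts_model, pts_obs):
    q, t = params[:4], params[4:]
    return ((quat_to_rot(q) @ pts_model.T).T + t - pts_obs).ravel()

# Toy data: rotate known points by 30 degrees about z and translate them.
rng = np.random.default_rng(0)
pts_model = rng.normal(size=(10, 3))
a = np.radians(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
pts_obs = (R_true @ pts_model.T).T + np.array([0.5, -1.0, 2.0])

x0 = np.array([1.0, 0, 0, 0, 0, 0, 0])   # identity rotation, zero translation
sol = least_squares(residuals, x0, args=(pts_model, pts_obs), method='trf')
print(sol.x[4:])                           # recovered translation
```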