In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular when using the (here also optional) search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove, under the common assumptions used in direct search for single-objective optimization, that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. However, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems in terms of their efficiency and robustness in determining Pareto fronts.
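As a rough illustration of the poll-and-filter idea described in this abstract, the following Python sketch maintains a list of nondominated points and polls around a chosen center along the coordinate directions. The names (F, poll_step, archive), the toy bi-objective problem, and the single shared step size are our assumptions for illustration, not the DMS implementation itself.

```python
import numpy as np

def dominates(fa, fb):
    """Pareto dominance: fa dominates fb if it is no worse in every
    objective and strictly better in at least one."""
    return np.all(fa <= fb) and np.any(fa < fb)

def poll_step(F, center, alpha, archive):
    """One poll step around `center`: evaluate trial points along the
    coordinate directions and keep only nondominated points in `archive`
    (a list of (x, F(x)) pairs)."""
    n = center.size
    directions = np.vstack([np.eye(n), -np.eye(n)])
    success = False
    for d in directions:
        x_new = center + alpha * d
        f_new = F(x_new)
        if not any(dominates(fa, f_new) for _, fa in archive):
            # drop archived points dominated by the new one, then add it
            archive = [(x, f) for x, f in archive if not dominates(f_new, f)]
            archive.append((x_new, f_new))
            success = True
    return archive, success

# toy bi-objective problem: minimize (||x||^2, ||x - 1||^2)
F = lambda x: np.array([np.sum(x**2), np.sum((x - 1.0)**2)])
rng = np.random.default_rng(0)
archive = [(np.zeros(2), F(np.zeros(2)))]
alpha = 1.0
for _ in range(100):
    center = archive[rng.integers(len(archive))][0]  # pick a poll center from the list
    archive, success = poll_step(F, center, alpha, archive)
    if not success:
        alpha /= 2.0  # contract the step size after an unsuccessful poll
```

In this simplified loop the step size is shared by all points; the actual method allows general positive spanning sets, per-point step sizes, and an optional search step on top of this polling structure.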
In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values.
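A minimal sketch of the search/poll skeleton just described, assuming bound constraints only: the optional search argument stands in for a heuristic such as one generation of a particle swarm, while the coordinate-direction poll step is what carries the convergence guarantee. The function name coordinate_search and its interface are hypothetical, not the paper's code.

```python
import numpy as np

def coordinate_search(f, x0, lower, upper, alpha=1.0, max_iter=200, search=None):
    """Skeleton of coordinate search with an optional search step.
    `search(x, alpha)`, if given, may return any finite set of trial points
    (e.g., one generation of a particle swarm).  Hypothetical sketch."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        improved = False
        if search is not None:                  # optional search step
            for y in search(x, alpha):
                y = np.clip(y, lower, upper)
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
        if not improved:                        # poll step along +-e_i
            for d in np.vstack([np.eye(n), -np.eye(n)]):
                y = np.clip(x + alpha * d, lower, upper)
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
        alpha = alpha * 2.0 if improved else alpha / 2.0
    return x, fx

# usage on a smooth bound-constrained problem
x_best, f_best = coordinate_search(lambda x: np.sum((x - 0.3)**2), [2.0, -3.0],
                                   lower=-5.0, upper=5.0)
```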
PSwarm was developed originally for the global optimization of functions without derivatives where the variables lie within upper and lower bounds. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the (optional) search step of coordinate search, the algorithm incorporates a particle swarm scheme for dissemination of points in the feasible region, equipping the overall method with the capability of finding a global minimizer. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods based only on function values.

PSwarm is extended in this paper to handle general linear constraints. The poll step now incorporates positive generators for the tangent cone of the approximated active constraints, including a provision for the degenerate case. The search step has also been adapted accordingly. In particular, the initial population for the particle swarm used in the search step is computed by first inscribing an ellipsoid of maximum volume in the feasible set. We have again compared PSwarm to other solvers (including some designed for global optimization), and the results confirm its competitiveness in terms of efficiency and robustness.
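For the non-degenerate case, positive generators of the tangent cone of the nearly active linear constraints can be obtained from a right inverse and a null-space basis of the active constraint normals. The sketch below illustrates that standard construction under the assumption that the active normals are linearly independent; the function name, the tolerance, and the fallback to the coordinate directions at interior points are our choices, and the degenerate case addressed in the paper is not covered here.

```python
import numpy as np

def tangent_cone_generators(A, b, x, eps=1e-3):
    """Positive generators of the tangent cone of the eps-active constraints
    of A x <= b at x, assuming the active normals are linearly independent
    (the non-degenerate case).  Illustrative sketch only."""
    active = (b - A @ x) <= eps
    W = A[active]                          # normals of nearly active constraints
    n = x.size
    if W.size == 0:                        # no active constraints: poll along +-e_i
        return np.vstack([np.eye(n), -np.eye(n)])
    U, s, Vt = np.linalg.svd(W)
    rank = int(np.sum(s > 1e-12))
    N = Vt[rank:].T                        # orthonormal basis of null(W)
    B = W.T @ np.linalg.inv(W @ W.T)       # right inverse: W @ B = I
    gens = [-B.T]                          # rows d with W d = -e_i <= 0
    if N.size:
        gens += [N.T, -N.T]                # rows d with W d = 0
    return np.vstack(gens)                 # each row is a generator direction
```

Polling from a feasible point x along these directions keeps the active constraints satisfied, and the remaining constraints stay satisfied for sufficiently small step sizes.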
A number of first-order methods have been proposed for smooth multiobjective optimization, for which some form of convergence to first-order criticality has been proved. Such convergence is global in the sense of being independent of the starting point. In this paper we analyze the rate of convergence of gradient descent for smooth unconstrained multiobjective optimization, and we do so for nonconvex, convex, and strongly convex vector functions. These global rates are shown to be the same as for gradient descent in single-objective optimization and correspond to appropriate worst-case complexity bounds. In the convex cases, the rates are given for implicit scalarizations of the problem's vector function.
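For reference, the standard single-objective gradient-descent rates that the abstract says are matched in the multiobjective case are, for an L-smooth objective f with a fixed step size 1/L after k iterations (a hedged summary of well-known bounds, not a reproduction of the paper's theorems):

```latex
\begin{align*}
\text{nonconvex:}       \quad & \min_{0 \le j \le k} \|\nabla f(x_j)\| = \mathcal{O}\big(1/\sqrt{k}\big),\\
\text{convex:}          \quad & f(x_k) - f^\ast = \mathcal{O}\big(1/k\big),\\
\text{strongly convex:} \quad & f(x_k) - f^\ast = \mathcal{O}\big(\rho^{\,k}\big), \quad \rho \in (0,1).
\end{align*}
```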