In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove, under the common assumptions used in direct search for single-objective optimization, that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. However, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems, in terms of their efficiency and robustness in determining Pareto fronts.
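The core bookkeeping in DMS is the Pareto-dominance filter that maintains the list of nondominated points from which poll centers are drawn. The following is a minimal Python sketch of that filter only; the function names and the list representation are illustrative assumptions made for this summary, not the authors' implementation.

import numpy as np

def dominates(fa, fb):
    # fa Pareto-dominates fb: no worse in every objective, strictly better in at least one
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def update_nondominated(archive, x_new, f_new):
    # Try to insert (x_new, f_new) into the archive of nondominated points,
    # discarding any archived entries that the newcomer dominates.
    f_new = np.asarray(f_new)
    for _, f_old in archive:
        if dominates(f_old, f_new) or np.array_equal(f_old, f_new):
            return archive, False                      # newcomer is dominated (or duplicated): reject
    archive = [(x, f) for (x, f) in archive if not dominates(f_new, f)]
    archive.append((x_new, f_new))
    return archive, True

# example with assumed data: three evaluations of a two-objective function
archive = []
for x, f in [(0.0, [1.0, 3.0]), (1.0, [2.0, 2.0]), (2.0, [3.0, 3.0])]:
    archive, added = update_nondominated(archive, x, f)
# archive now holds the two mutually nondominated points

Roughly speaking, a DMS iteration polls around a point chosen from this list and is declared successful when the list changes, i.e., when a newly evaluated point survives the filter.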
Abstract. In this paper, we introduce a number of ways of making pattern search more efficient by reusing previous evaluations of the objective function, based on the computation of simplex derivatives (e.g., simplex gradients). At each iteration, one can attempt to compute an accurate simplex gradient by identifying a sampling set of previously evaluated points with good geometrical properties. This can be done using only past successful iterates or by considering all past function evaluations. The simplex gradient can then be used to reorder the evaluations of the objective function associated with the directions used in the poll step or to update the mesh size parameter according to a sufficient decrease criterion, neither of which requires new function evaluations. A search step can also be tried along the negative simplex gradient at the beginning of the current pattern search iteration. We present these procedures in detail and apply them to a set of problems from the CUTEr collection. Numerical results show that these procedures can enhance significantly the practical performance of pattern search methods.

Key words. derivative-free optimization, pattern search methods, simplex gradient, poll ordering, multivariate polynomial interpolation, poisedness

AMS subject classifications. 65D05, 90C30, 90C56

1. Introduction. We are interested in this paper in designing efficient (derivative-free) pattern search methods for nonlinear optimization problems. We focus our attention on unconstrained optimization problems of the form min_{x ∈ R^n} f(x). The curve representing the objective function value as a function of the number of function evaluations frequently exhibits an L-shape for pattern search runs. This class of methods, perhaps because of their directional features, is relatively good at quickly decreasing the objective function from its initial value. However, they can be slow thereafter, especially towards stationarity, when the frequency of unsuccessful iterations tends to increase. There has not been much effort in trying to develop efficient serial implementations of pattern search methods for the minimization of general functions. Some attention has been paid to parallelization (see Hough, Kolda, and Torczon [13]). The goal of this paper is to develop a number of strategies for improving the efficiency of the current pattern search iteration, based on function evaluations obtained at previous iterations.
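As an illustration of the two cheapest uses described above, the sketch below computes a least-squares simplex gradient from previously evaluated points and uses it to reorder the poll directions. It is a sketch under assumed names and data layout; the geometry (poisedness) check used in the paper to select the sampling set is omitted.

import numpy as np

def simplex_gradient(y0, f0, points, fvals):
    # Regression simplex gradient at y0: least-squares solution of S^T g = delta_f,
    # where S holds the displacements y^i - y^0 as columns.
    S = np.column_stack([y - y0 for y in points])        # n x m matrix of displacements
    delta_f = np.asarray(fvals) - f0                     # corresponding function-value differences
    g, *_ = np.linalg.lstsq(S.T, delta_f, rcond=None)
    return g

def order_poll_directions(directions, g):
    # Directions most aligned with -g (smallest inner product with g) are tried first;
    # no new function evaluations are required.
    return sorted(directions, key=lambda d: float(np.dot(g, d)) / np.linalg.norm(d))

# example with assumed data around y0 = (0, 0)
y0, f0 = np.zeros(2), 0.0
pts = [np.array([0.1, 0.0]), np.array([0.0, 0.1]), np.array([0.1, 0.1])]
vals = [0.02, 0.05, 0.07]
g = simplex_gradient(y0, f0, pts, vals)
poll = order_poll_directions([np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
                              np.array([0.0, 1.0]), np.array([0.0, -1.0])], g)

Directions forming the smallest angle with the negative simplex gradient are polled first, so opportunistic polling tends to find a successful point with fewer evaluations.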
The goal of this paper is to show that the use of minimum Frobenius norm quadratic models can improve the performance of direct-search methods. The approach taken here is to maintain the structure of directional direct-search methods, organized around a search and a poll step, and to use the set of previously evaluated points generated during a direct-search run to build the models. The minimization of the models within a trust region provides an enhanced search step. Our numerical results show that such a procedure can lead to a significant improvement of direct search for smooth, piecewise smooth, and stochastic and nonstochastic noisy problems.
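In outline, one standard formulation of such a model reads as follows (the notation below is assumed for this summary, not quoted from the paper). Given the current iterate $x_k$ and previously evaluated points $y^0,\dots,y^p$ (fewer than needed for full quadratic interpolation), the minimum Frobenius norm quadratic model
\[
m(x) = c + g^\top (x - x_k) + \tfrac{1}{2}(x - x_k)^\top H (x - x_k)
\]
is obtained by solving
\[
\min_{c,\ g,\ H = H^\top} \ \tfrac{1}{2}\|H\|_F^2
\quad \text{s.t.} \quad m(y^i) = f(y^i), \quad i = 0, \dots, p,
\]
and the enhanced search step then approximately solves the trust-region subproblem
\[
\min_{x} \ m(x) \quad \text{s.t.} \quad \|x - x_k\| \le \Delta_k .
\]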
It is known that the Clarke generalized directional derivative is nonnegative along the limit directions generated by directional direct-search methods at a limit point of certain subsequences of unsuccessful iterates, if the function being minimized is Lipschitz continuous near the limit point. In this paper we generalize this result to discontinuous functions using Rockafellar generalized directional derivatives (upper subderivatives). We show that Rockafellar derivatives are also nonnegative along the limit directions of those subsequences of unsuccessful iterates when the function values converge to the function value at the limit point. This result is obtained assuming that the function is directionally Lipschitz with respect to the limit direction. It is also possible, under appropriate conditions, to establish more insightful results by showing that the sequence of points generated by these methods eventually approaches the limit point along the locally best branch or step function (when the number of steps is equal to two). The results of this paper are presented for constrained optimization and illustrated numerically.
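For orientation, one common form of the two derivatives involved is recalled below; the notation is an assumption of this summary rather than a quotation from the paper. For $f$ Lipschitz continuous near $x$ and a direction $d$, the Clarke generalized directional derivative is
\[
f^{\circ}(x; d) = \limsup_{y \to x,\ t \downarrow 0} \frac{f(y + t d) - f(y)}{t},
\]
and the Lipschitz result referred to above states that $f^{\circ}(x_*; d) \ge 0$ for every limit direction $d$ of the relevant subsequences of unsuccessful iterates. The Rockafellar upper subderivative, which remains meaningful for discontinuous $f$, can be written as
\[
f^{\uparrow}(x; d) = \lim_{\varepsilon \downarrow 0}\ \limsup_{y \to x,\ f(y) \to f(x),\ t \downarrow 0}\ \inf_{w \in B(d;\varepsilon)} \frac{f(y + t w) - f(y)}{t},
\]
and the generalization established in the paper asserts $f^{\uparrow}(x_*; d) \ge 0$ along those limit directions, under the directional Lipschitz assumption and the convergence of function values mentioned above.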
Abstract. It has been shown recently that the efficiency of direct search methods that use opportunistic polling in positive spanning directions can be improved significantly by reordering the poll directions according to descent indicators built from simplex gradients. The purpose of this paper is twofold. First, we analyze the properties of simplex gradients of nonsmooth functions in the context of direct search methods like the Generalized Pattern Search (GPS) and the Mesh Adaptive Direct Search (MADS), for which there exists a convergence analysis in the nonsmooth setting. Our analysis does not require continuous differentiability and can be seen as an extension of the accuracy properties of simplex gradients known for smooth functions. Second, we test the use of simplex gradients when pattern search is applied to nonsmooth functions, confirming the merit of the poll ordering strategy for such problems.
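For context, the smooth-case accuracy property that this analysis extends can be stated as follows (this is the standard bound, written here with assumed notation rather than quoted from the paper): if the simplex gradient $\nabla_s f(y^0)$ is built from a poised set $\{y^0, y^1, \dots, y^n\}$ contained in a ball of radius $\Delta$ around $y^0$, and $\nabla f$ is Lipschitz continuous there with constant $\nu$, then
\[
\|\nabla f(y^0) - \nabla_s f(y^0)\| \ \le\ \frac{\nu \sqrt{n}}{2}\, \|\hat{S}^{-1}\|\, \Delta,
\qquad
\hat{S} = \frac{1}{\Delta}\,\big[\,y^1 - y^0 \ \cdots \ y^n - y^0\,\big],
\]
so the simplex gradient acts as a first-order accurate descent indicator as $\Delta \to 0$, which is what makes it useful for ordering poll directions.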