The software package SNOBFIT for bound-constrained (and soft-constrained) noisy optimization of an expensive objective function is described. It combines global and local search by branching and local fits. The program is made robust and flexible for practical use by allowing for hidden constraints, batch function evaluations, change of search regions, etc.
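To make the workflow the abstract describes more concrete, the following is a minimal, self-contained sketch of driving a noisy, expensive black-box objective with batch evaluations, bound constraints, and hidden constraints (signalled by NaN). It is not SNOBFIT's interface; the batch proposals here are plain random sampling, whereas SNOBFIT would choose them by branching the box and fitting local surrogate models.

```python
import numpy as np

def expensive_objective(x):
    """Noisy black-box function; NaN signals a hidden-constraint violation."""
    if x[0] + x[1] > 1.5:                      # hypothetical hidden constraint
        return np.nan
    return (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2 + 0.01 * np.random.randn()

rng = np.random.default_rng(0)
lower, upper = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
best_x, best_f = None, np.inf

for _ in range(50):                            # outer iterations
    # propose a batch of points inside the bounds (random here; a real solver
    # would pick them from global branching and local model fits)
    batch = rng.uniform(lower, upper, size=(8, 2))
    values = np.array([expensive_objective(x) for x in batch])  # one batch of evaluations
    ok = ~np.isnan(values)                     # discard hidden-constraint failures
    if ok.any() and values[ok].min() < best_f:
        i = np.argmin(np.where(ok, values, np.inf))
        best_x, best_f = batch[i], values[i]

print("best point:", best_x, "best value:", best_f)
```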
Abstract. Results are reported of testing a number of existing state-of-the-art solvers for global constrained optimization and constraint satisfaction on a set of over 1000 test problems in up to 1000 variables.

Overview. As the recent survey by Neumaier [24] of complete solution techniques in global optimization documents, there are now about a dozen solvers for constrained global optimization that claim to solve global optimization and/or constraint satisfaction problems to global optimality by performing a complete search.

Within the COCONUT project [30,31], we evaluated many of the existing software packages for global optimization and constraint satisfaction problems. This is the first time that different constrained global optimization and constraint satisfaction algorithms have been compared on a systematic basis, with a test set large enough to support statistically significant conclusions. We tested the global solvers BARON, GlobSol, ICOS, LGO, LINGO, OQNLP, Premium Solver, the local solver MINOS, and a basic combination strategy COCOS implemented in the COCONUT platform.

The testing process turned out to be extremely time-consuming, for various reasons not initially anticipated. A lot of effort went into creating appropriate interfaces, making the comparison fair and reliable, and making it possible to process a large number of test examples in a semiautomatic fashion.

In a recent paper about testing local optimization software, Dolan & Moré [6,7] write: "We realize that testing optimization software is a notoriously difficult problem and that there may be objections to the testing presented in this report. For example, performance of a particular solver may improve significantly if non-default options are given. Another objection is that we only use one starting point per problem and that the performance of a solver may be sensitive to the choice of starting point. We also have used the default stopping criteria of the solvers. This choice may bias results but should not affect comparisons that rely on large time differences. In spite of these objections, we feel that it is essential that we provide some indication of the performance of optimization solvers on interesting problems."

These difficulties are also present in our benchmarking studies. Section 2 describes our testing methodology. We use a large test set of over 1000 problems from various collections. Our main performance criterion is currently how often the attainment of the global optimum, or the infeasibility of a problem, is correctly or incorrectly claimed (within some time limit). All solvers are tested with the default options suggested by the providers of the codes, with the request to stop at a time limit or after the solver believed the first global solution had been obtained.

These are very high standards, much more demanding than what had been done by anyone before. Thorough comparisons are indeed very rare, due to the difficulty of performing extensive and meaningful testing. Indeed, we know of only two comparative studies [17,22] in global ...
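As a rough illustration of the main performance criterion described above, the sketch below tallies how often a solver's claim of global optimality or infeasibility is correct against known reference solutions. The record fields and the tolerance are hypothetical placeholders, not the COCONUT evaluation scripts.

```python
from dataclasses import dataclass
import math

@dataclass
class Result:                        # hypothetical per-problem record
    claimed_status: str              # "global", "infeasible", "timeout", ...
    claimed_value: float             # objective value reported by the solver
    known_value: float               # best known objective value
    known_infeasible: bool           # whether the problem is known to be infeasible

def score(results, tol=1e-5):
    """Count correct and wrong claims of global optimality / infeasibility."""
    correct = wrong = 0
    for r in results:
        if r.claimed_status == "global":
            ok = (not r.known_infeasible
                  and abs(r.claimed_value - r.known_value)
                      <= tol * max(1.0, abs(r.known_value)))
            correct += ok
            wrong += not ok
        elif r.claimed_status == "infeasible":
            correct += r.known_infeasible
            wrong += not r.known_infeasible
    return correct, wrong

# Example: one correct optimality claim, one wrong infeasibility claim
demo = [Result("global", 1.0, 1.0, False),
        Result("infeasible", math.nan, 2.5, False)]
print(score(demo))                   # -> (1, 1)
```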
Four methods for global numerical black-box optimization with origins in the mathematical programming community are described and experimentally compared with the state-of-the-art evolutionary method, BIPOP-CMA-ES. The methods chosen for the comparison exhibit various features that are potentially interesting for the evolutionary computation community: systematic sampling of the search space (DIRECT, MCS), possibly combined with a local search method (MCS), or a multi-start approach (NEWUOA, GLOBAL), possibly equipped with a careful selection of points from which to run a local optimizer (GLOBAL). The recently proposed "comparing continuous optimizers" (COCO) methodology was adopted as the basis for the comparison. Based on the results, we offer suggestions on which algorithm to use depending on the available budget of function evaluations, and we propose several possibilities for hybridizing evolutionary algorithms (EAs) with features of the other compared algorithms.
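As a simple illustration of the multi-start idea mentioned above, the following sketch samples candidate starting points, keeps the most promising ones, and runs a local derivative-free optimizer from each. The sample sizes, budget, and the use of SciPy's Nelder-Mead on the Rosenbrock test function are illustrative assumptions, not the algorithms actually benchmarked (GLOBAL, for instance, selects starting points by clustering).

```python
import numpy as np
from scipy.optimize import minimize, rosen   # rosen: Rosenbrock test function

def multistart(f, lower, upper, n_samples=200, n_starts=5, seed=0):
    """Naive multi-start: sample uniformly, run a local search from the best points."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    fX = np.array([f(x) for x in X])
    starts = X[np.argsort(fX)[:n_starts]]      # most promising candidate starts
    best = None
    for x0 in starts:                          # local refinement from each start
        res = minimize(f, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

best = multistart(rosen, np.array([-2.0, -2.0]), np.array([2.0, 2.0]))
print(best.x, best.fun)                        # should land near the minimum at (1, 1)
```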