Per Instance Algorithm Configuration (PIAC) relies on features that describe problem instances. It builds an Empirical Performance Model (EPM) from a training set of (instance, parameter configuration) pairs together with the corresponding performance of the algorithm at hand. This paper presents a case study in the continuous black-box optimization domain, using features proposed in the literature. The target algorithm is CMA-ES, and three of its hyper-parameters are configured. Special care is taken regarding the computational cost of the features. The EPM is learned on the BBOB benchmark but tested on independent test functions gathered from the optimization literature. The results demonstrate that the proposed approach can outperform the default setting of CMA-ES with as few as 30 or 50 times the problem dimension additional function evaluations for feature computation.
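A minimal sketch, assuming NumPy and scikit-learn, of how such an EPM could be assembled from (instance, configuration, performance) triples; the array names, sizes, and the random-forest regressor are illustrative assumptions, not the paper's exact setup.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Hypothetical training set: one row per (instance, configuration) pair.
    instance_features = rng.normal(size=(200, 8))   # e.g. ELA-style features per instance
    configs = rng.uniform(size=(200, 3))            # e.g. three CMA-ES hyper-parameters
    performance = rng.uniform(size=200)             # observed performance of each run

    # The EPM maps (instance features, configuration) to predicted performance.
    X = np.hstack([instance_features, configs])
    epm = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, performance)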
Algorithm Configuration remains an intricate problem, especially in the continuous black-box optimization domain. This paper empirically investigates the relationship between continuous problem features (measuring different problem characteristics) and the best parameter configuration of a given stochastic algorithm over a set of test functions, namely the original version of Differential Evolution over the BBOB test bench. This is achieved by learning an empirical performance model from the problem features and the algorithm parameters. This performance model can then be used to compute an empirically optimal parameter configuration from feature values. The results show that reasonable performance models can indeed be learned, resulting in a better parameter configuration than a static parameter setting optimized for robustness over the test bench.
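A minimal, self-contained sketch, again assuming NumPy and scikit-learn, of how a learned performance model could be queried to derive an empirically optimal configuration from feature values; the placeholder training data and the coarse random candidate grid are assumptions made only so the example runs, standing in for the model learned over the BBOB test bench.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_features, n_params = 8, 3                     # e.g. three DE parameters

    # Placeholder data standing in for the empirical performance model.
    X_train = rng.uniform(size=(200, n_features + n_params))
    y_train = rng.uniform(size=200)
    epm = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Features of a new instance and a random grid of candidate configurations.
    new_features = rng.uniform(size=n_features)
    candidates = rng.uniform(size=(1000, n_params))
    queries = np.hstack([np.tile(new_features, (len(candidates), 1)), candidates])

    # Empirically optimal configuration: the candidate with the best predicted performance.
    best_config = candidates[np.argmin(epm.predict(queries))]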
A possible approach to Algorithm Selection and Configuration for continuous black-box optimization problems relies on problem features computed from a set of evaluated sample points. However, computing the features proposed in the literature requires a rather large number of such sample points, which is unlikely to be practical for expensive real-world problems. On the other hand, surrogate models have been proposed to tackle the optimization of expensive objective functions. This paper proposes using surrogate models to approximate the feature values at a reasonable computational cost. Two experimental studies are conducted, using the well-known BBOB framework as a test bench. First, the effect of sub-sampling is analyzed. Then, a methodology to compute approximate feature values using a surrogate model is proposed and validated from the point of view of retrieving the BBOB function classes. It is shown that when only small computational budgets are available, using surrogate models as proxies to compute the features can be beneficial.
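A minimal sketch, assuming NumPy and scikit-learn, of the general idea: fit a surrogate on a small evaluated sample, then compute a feature on a much larger, surrogate-predicted sample. The Gaussian-process surrogate, the sphere test function, and the fitness-distance-correlation feature are illustrative assumptions rather than the paper's exact choices.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def objective(x):                               # stand-in for an expensive black box
        return np.sum(x ** 2, axis=1)

    rng = np.random.default_rng(1)
    dim = 5
    X_small = rng.uniform(-5, 5, size=(10 * dim, dim))   # small, affordable sample
    y_small = objective(X_small)                         # the only true evaluations used

    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_small, y_small)

    # Large sample "evaluated" for free through the surrogate.
    X_large = rng.uniform(-5, 5, size=(1000 * dim, dim))
    y_large = surrogate.predict(X_large)

    # Illustrative feature: fitness-distance correlation w.r.t. the best predicted point.
    best = X_large[np.argmin(y_large)]
    distances = np.linalg.norm(X_large - best, axis=1)
    fdc = np.corrcoef(distances, y_large)[0, 1]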