2019
DOI: 10.1162/evco_a_00242
Automated Algorithm Selection: Survey and Perspectives

Abstract: It has long been observed that for practically any computational problem that has been intensely studied, different instances are best solved using different algorithms. This is particularly pronounced for computationally hard problems, where in most cases, no single algorithm defines the state of the art; instead, there is a set of algorithms with complementary strengths. This performance complementarity can be exploited in various ways, one of which is based on the idea of selecting, from a set of given algo…

Cited by 317 publications (199 citation statements)
References 173 publications (212 reference statements)
“…Another interesting extension of IO-Hanalyzer would be the design of modules that allow us to couple the performance evaluation with an analysis of the fitness landscape of the considered problems. Such feature-based analyses are at the heart of algorithm selection techniques [30], which use landscape features and performance data to build a model that predicts how the tested algorithms will perform on a previously not tested problem. Similar approaches can be found in per-instance-algorithm configuration (PIAC) approaches, which have recently shown very promising performance in the context of continuous black-box optimization [6].…”
Section: Discussion
confidence: 99%
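The feature-based selection idea in the statement above — using landscape features and performance data to predict which algorithm will perform best on an unseen problem — can be sketched minimally. Everything here is illustrative: the feature values, the algorithm names, and the choice of a 1-nearest-neighbour model are assumptions, not details from the survey.

```python
import numpy as np

def select_algorithm(train_features, train_best, test_features):
    """Per-instance algorithm selection via 1-nearest-neighbour:
    for each test instance, return the algorithm that performed best
    on the closest training instance (Euclidean feature distance)."""
    choices = []
    for x in test_features:
        dists = np.linalg.norm(train_features - x, axis=1)
        choices.append(train_best[int(np.argmin(dists))])
    return choices

# Toy data: two landscape features per instance; the label is the
# algorithm that was best on that instance (hypothetical names).
train_X = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]])
train_y = ["CMA-ES", "BFGS", "CMA-ES", "BFGS"]

print(select_algorithm(train_X, train_y, np.array([[0.15, 0.85]])))
# → ['CMA-ES']
```

In practice a richer model (e.g. a random forest or cost-sensitive regression over full performance data) replaces the nearest-neighbour rule, but the pipeline shape — features in, predicted algorithm out — is the same.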
“…As algorithm selection is often implemented using machine learning [28], [29], we need two preparation steps: (i) instance features that characterise instances numerically, (ii) performance data of each algorithm on each instance. We have already characterised our corpora in Section III-B, so we only need to run each of the 17 configurations on all corpora.…”
Section: Per-corpus Configuration
confidence: 99%
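The two preparation steps named in the statement above — (i) numeric instance features and (ii) performance data for each algorithm on each instance — might look like this in a minimal sketch. The feature values, runtimes, and three-configuration setup are hypothetical placeholders.

```python
import numpy as np

# Step (i): numeric features characterising each instance
# (hypothetical corpus statistics, e.g. size and some ratio).
features = np.array([[120.0, 0.3],
                     [4000.0, 0.7],
                     [90.0, 0.2]])

# Step (ii): performance of each configuration on each instance
# (rows = instances, columns = configurations; lower runtime is better).
runtimes = np.array([
    [1.2, 0.8, 3.1],
    [5.0, 9.4, 4.2],
    [0.9, 0.7, 2.8],
])

# The per-instance best configuration becomes the training label
# that a machine-learned selector would map the features onto.
best = runtimes.argmin(axis=1)
print(best.tolist())  # → [1, 2, 1]
```

A selector trained on `(features, best)` pairs can then pick a configuration for a new corpus without running all 17 configurations on it.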
“…Given that there are many optimisation algorithms available, and that each algorithm has many variants and potential hybrids, this process of selecting the optimal optimiser can be very involved and time consuming. One way of addressing this is to use a machine learning algorithm to select or design an optimiser on the user's behalf [Rice, 1976; Kerschke et al., 2019]. In particular, the hyperheuristics community [Burke et al., 2013; Swan et al., 2018] has been exploring this idea for some time, typically by using evolutionary algorithms to select or generate the heuristics used by a particular metaheuristic framework, or by generating new metaheuristic frameworks from combinations of existing heuristics.…”
Section: Introduction
confidence: 99%