2014
DOI: 10.1016/j.artint.2013.10.003

Algorithm runtime prediction: Methods & evaluation

Abstract: Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input, using machine learning techniques to build a model of the algorithm's runtime as a function of problem-specific instance features. Such models have important applications to algorithm analysis, portfolio-based algorithm selection, and the automatic configuration of parameterized algorithms. Over the past decade, a wide variety of techniques have been studied for building such models. Here, we de…
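As a rough illustration of the idea in the abstract, the sketch below fits a regression model to (instance features, runtime) pairs and queries it on an unseen instance. This is a minimal sketch with synthetic data, not the paper's experimental setup; the feature matrix, the log-runtime target, and the choice of scikit-learn's RandomForestRegressor are all assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical data: rows are problem instances, columns are
# instance features (e.g., size, clause/variable ratio, probing stats).
rng = np.random.default_rng(0)
X = rng.random((500, 20))                     # 500 instances, 20 features
y = 3.0 * X[:, 0] + rng.normal(0, 0.1, 500)   # synthetic log10(runtime)

# Runtimes vary over orders of magnitude, so empirical performance
# models are typically fit to log-transformed runtime.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

x_new = rng.random((1, 20))                   # a previously unseen instance
predicted_log_runtime = model.predict(x_new)[0]
print(f"predicted runtime: {10 ** predicted_log_runtime:.2f} s")
```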

Cited by 360 publications (344 citation statements)
References 83 publications
“…In very recent work, we comprehensively studied EPMs based on a variety of modeling techniques that have been used for performance prediction over the years, including ridge regression [15], neural networks [23], Gaussian processes [20], regression trees [24], and random forests [22]. Overall, we found random forests and approximate Gaussian processes to perform best.…”
Section: Empirical Performance Models (citation type: mentioning)
confidence: 99%
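The quoted comparison covers ridge regression, neural networks, Gaussian processes, regression trees, and random forests. A minimal sketch of such a comparison, assuming scikit-learn implementations and synthetic data rather than the authors' actual EPM benchmarks and hyperparameters:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for (instance features, log runtime) data.
rng = np.random.default_rng(1)
X = rng.random((300, 10))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.05, 300)

models = {
    "ridge regression": Ridge(alpha=1.0),
    "neural network": MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000),
    "Gaussian process": GaussianProcessRegressor(),
    "regression tree": DecisionTreeRegressor(max_depth=8),
    "random forest": RandomForestRegressor(n_estimators=100),
}

# Cross-validated error as a simple proxy for the paper's evaluation.
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:>18s}: RMSE = {rmse:.3f}")
```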
“…Random forests (and also regression trees) were particularly strong for very heterogeneous benchmark sets, since their tree-based mechanism automatically groups similar inputs together and does not allow widely different inputs to interfere with the predictions for a given group. Another benefit of the tree-based methods is that hundreds of training data points already sufficed to yield competitive performance predictions in joint input spaces induced by as many as 76 algorithm parameters and 148 instance features [22]. This strong performance suggests that the functions being modeled must be relatively simple, for example by depending at most very weakly on most inputs.…”
Section: Empirical Performance Models (citation type: mentioning)
confidence: 99%
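To make the "joint input space" from the quote above concrete: each training point concatenates an algorithm parameter setting with an instance feature vector, and a single model is fit over both. A hedged sketch, reusing the dimensions quoted above (76 parameters, 148 features) but with synthetic data and a deliberately simple target that depends on only a few inputs, as the quote suggests:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n_train = 500                        # "hundreds of training data points"
theta = rng.random((n_train, 76))    # algorithm parameter settings
phi = rng.random((n_train, 148))     # instance features
X = np.hstack([theta, phi])          # joint input space, 224 dimensions

# Synthetic target that depends strongly on only a few inputs,
# mimicking the "relatively simple" functions suggested in the text.
def target(theta, phi, rng, n):
    return 2.0 * theta[:, 0] + phi[:, 0] - phi[:, 1] + rng.normal(0, 0.05, n)

y = target(theta, phi, rng, n_train)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Held-out data to check that a few hundred points suffice here.
n_test = 100
theta_t, phi_t = rng.random((n_test, 76)), rng.random((n_test, 148))
X_test = np.hstack([theta_t, phi_t])
y_test = target(theta_t, phi_t, rng, n_test)
print("held-out R^2:", model.score(X_test, y_test))
```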
“…For comprehensive surveys about algorithm selection and runtime prediction, we refer the interested reader to [171,108]. There are also several doctoral dissertations related to the AS problem, namely: [175,105,36,69,58,120,130].…”
Section: Related Work (citation type: mentioning)
confidence: 99%