2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI)
DOI: 10.1109/ictai.2016.0105
Learning Sequential and Parallel Runtime Distributions for Randomized Algorithms

Cited by 4 publications (6 citation statements) | References 32 publications
“…For instance, for many algorithms, the distribution for a certain class of problems has been observed empirically. Arbelaez et al [2] use a machine learning approach to predict the runtime distributions of several randomized algorithms. This knowledge can be used to obtain a better restart strategy for this class of problems.…”
Section: Introduction
confidence: 99%
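The idea referenced here — using a known runtime distribution to derive a better restart strategy — can be sketched with a small simulation. This is an illustrative example only, not the authors' method: the lognormal parameters and the 60th-percentile cutoff are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lognormal runtime model for a randomized algorithm
# (parameters invented for illustration; heavy right tail).
mu, sigma = np.log(100.0), 1.5

def sample_runtimes(n):
    return rng.lognormal(mean=mu, sigma=sigma, size=n)

def expected_total_time(cutoff, n=200_000):
    """Estimated expected time to first success when every run is
    aborted and restarted after `cutoff` time units (runs independent)."""
    samples = sample_runtimes(n)
    p = (samples <= cutoff).mean()            # chance one run finishes in time
    mean_hit = samples[samples <= cutoff].mean()
    # Geometric number of aborted runs, each costing `cutoff` time units.
    return cutoff * (1 - p) / p + mean_hit

no_restart = sample_runtimes(200_000).mean()
cutoff = np.exp(mu + sigma * 0.2533)          # ~60th percentile of the model
print(f"mean time without restarts: {no_restart:.1f}")
print(f"expected time, restart at 60th percentile: {expected_total_time(cutoff):.1f}")
```

Because the modeled distribution is heavy-tailed, cutting off long runs and restarting lowers the expected time to a solution — which is exactly why knowing the distribution family matters for choosing the cutoff.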
“…The lognormal, the Weibull, and the GP distributions have already been considered suitable in previous research articles. Most notably, Arbelaez et al [3] observed that the runtime behavior of randomly generated 3-SAT instances with a clause-to-variable ratio of 4.2 can be described by lognormal distributions. Other results favoring the lognormal distribution are given by Arbelaez et al [2] and Truchet et al [22].…”
Section: Runtime Distributions
confidence: 99%
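Fitting a lognormal model to measured runtimes, as in the work cited above, reduces to fitting a normal distribution in log space. The sketch below uses synthetic data (not the 3-SAT measurements from the paper) so the recovered parameters can be checked against the known truth:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observed" solver runtimes, drawn from a known lognormal
# so the fit can be verified (in practice these come from repeated runs).
true_mu, true_sigma = 2.0, 0.7
runtimes = rng.lognormal(mean=true_mu, sigma=true_sigma, size=5000)

# Maximum-likelihood fit of a lognormal: take logs, fit a normal.
logs = np.log(runtimes)
mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)
print(f"fitted mu={mu_hat:.2f} sigma={sigma_hat:.2f}")

# Quick sanity check: compare an empirical quantile with the model's.
q90_empirical = np.quantile(runtimes, 0.9)
q90_model = np.exp(mu_hat + sigma_hat * 1.2816)  # 90th normal quantile
print(f"90th percentile: empirical={q90_empirical:.1f} model={q90_model:.1f}")
```

A closer match between empirical and model quantiles across the range indicates the lognormal family describes the runtimes well; heavier-tailed data would instead call for a Weibull or GP fit.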
“…The SATzilla feature extractor by Xu et al [25] creates the features of the instances (as motivated by Arbelaez et al [3]). It is called with the parameters -base, -ls and -lobjois, leading to a total of 81 features.…”
Section: Feature Extraction
confidence: 99%