A Neural Network Model for Inter-problem Adaptive Online Time Allocation

2005 · DOI: 10.1007/11550907_2

Cited by 10 publications (9 citation statements) · References 6 publications
“…We considered a set of 4 parametric probability distributions (shown in Table 1, with exemplary instantiations shown in Figure 2), most of which have been widely studied to describe the RTDs of combinatorial problem solvers [Frost et al., 1997; Gagliolo and Schmidhuber, 2006a; Hutter et al., 2006]. First, we considered the Normal distribution (N) as a baseline, due to its widespread use throughout the sciences.…”
Section: Parametric Families of RTDs
confidence: 99%
“…Since the runtimes of hard combinatorial solvers often vary on an exponential scale (likely due to the NP-hardness of the problems studied), a much better fit of empirical RTDs is typically achieved by a lognormal distribution (LOG); this distribution is attained if the logarithm of the runtimes is normally distributed, and it has been shown to fit empirical RTDs well in previous work [Frost et al., 1997].…”
Section: Parametric Families of RTDs
confidence: 99%
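The lognormal fit described in the statement above can be sketched concretely: if log-runtimes are normally distributed, the maximum-likelihood parameters are simply the mean and standard deviation of the logs. The following is an illustrative sketch (not the cited authors' code); the sample data are synthetic.

```python
import math
import random

def fit_lognormal(runtimes):
    """ML fit of a lognormal RTD: mean/std of the log-runtimes."""
    logs = [math.log(t) for t in runtimes]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return mu, math.sqrt(var)

# Synthetic check: draw lognormal "runtimes" with known parameters
# (mu=2.0, sigma=0.5) and recover them from the sample.
random.seed(0)
sample = [math.exp(random.gauss(2.0, 0.5)) for _ in range(10000)]
mu, sigma = fit_lognormal(sample)
```

With 10,000 samples the recovered `mu` and `sigma` land close to the true values, which is the kind of parameter-recovery sanity check one would run before fitting empirical solver RTDs.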
“…Other approaches include support vector machines (Hough & Williams, 2006; Arbelaez et al., 2009), reinforcement learning (Armstrong et al., 2006), neural networks (Gagliolo & Schmidhuber, 2005), decision tree ensembles (Hough & Williams, 2006), ensembles of general classification algorithms (Kotthoff, Miguel, & Nightingale, 2010), boosting (Bhowmick et al., 2006), hybrid approaches that combine regression and classification (Kotthoff, 2012a), multinomial logistic regression (Samulowitz & Memisevic, 2007), self-organising maps (Smith-Miles, 2008b) and clustering (Stamatatos & Stergiou, 2009; Stergiou, 2009; Kadioglu et al., 2010). Sayag et al. (2006) and Streeter et al. (2007a) compute schedules for running the algorithms in the portfolio based on a statistical model of the problem instance distribution and performance data for the algorithms.…”
Section: Per-portfolio Models
confidence: 99%
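The schedule-computation idea mentioned above can be illustrated with a minimal sketch (an assumption-laden toy, not the cited authors' algorithms): given per-algorithm timeouts and success probabilities estimated from performance data, running candidates in increasing timeout/success-probability order minimizes the expected time of a sequential portfolio with independent successes. The solver names and numbers below are hypothetical.

```python
# Portfolio entries: (name, timeout in seconds, estimated success probability).
portfolio = [
    ("sat_solver_a", 10.0, 0.6),
    ("sat_solver_b", 2.0, 0.2),
    ("sat_solver_c", 30.0, 0.9),
]

# Greedy sequential schedule: sort by timeout / success_prob.
# An exchange argument shows this order minimizes expected total time
# when each run independently succeeds with its estimated probability.
schedule = sorted(portfolio, key=lambda a: a[1] / a[2])
order = [name for name, _, _ in schedule]
print(order)
```

Here the ratios are 16.7, 10.0, and 33.3 respectively, so `sat_solver_b` is tried first despite its low success probability, because it is cheap relative to that probability.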
“…This trade-off is typically ignored in offline algorithm selection, and the size of the training set is chosen heuristically. In our previous work [13,14,15], we have kept an online view of algorithm selection, in which the only input available to the meta-learner is a set of algorithms, of unknown performance, and a sequence of problem instances that have to be solved. Rather than artificially subdividing the problem set into a training and a test set, we iteratively update the model each time an instance is solved, and use it to guide algorithm selection on the next instance.…”
Section: Introduction
confidence: 99%
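The online view described in the statement above can be sketched as a simple loop: no training/test split, just a stream of instances, a per-algorithm runtime model updated after every solve, and selection guided by the current model. This is an illustrative epsilon-greedy sketch under simplified assumptions (two hypothetical algorithms, simulated exponential runtimes), not the authors' actual method.

```python
import random

random.seed(1)
algorithms = {"alg_a": [], "alg_b": []}   # observed runtimes per algorithm
true_mean = {"alg_a": 3.0, "alg_b": 1.0}  # hypothetical ground-truth means

def select(eps=0.1):
    """Pick the algorithm with the lowest mean observed runtime,
    exploring at random with probability eps (or if data is missing)."""
    if random.random() < eps or any(not v for v in algorithms.values()):
        return random.choice(list(algorithms))
    return min(algorithms, key=lambda a: sum(algorithms[a]) / len(algorithms[a]))

for _ in range(200):                       # sequence of problem instances
    chosen = select()
    runtime = random.expovariate(1.0 / true_mean[chosen])  # simulated solve
    algorithms[chosen].append(runtime)     # update the model online

best = min(algorithms, key=lambda a: sum(algorithms[a]) / len(algorithms[a]))
```

The point of the sketch is the structure, not the selection rule: the model is refined after each solved instance and immediately guides the choice on the next one, mirroring the online setting the authors describe.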