2016
DOI: 10.1016/j.parco.2016.02.001

Parallel simulated annealing using an adaptive resampling interval

Abstract: This paper presents a parallel simulated annealing algorithm that is able to achieve 90% parallel efficiency in iteration on up to 192 processors and up to 40% parallel efficiency in time when applied to a 5000-dimension Rastrigin function. Our algorithm breaks scalability barriers in the method of Chu et al. (1999) by abandoning adaptive cooling based on variance. The resulting gains in parallel efficiency are much larger than the loss of serial efficiency from lack of adaptive cooling. Our algorithm resample…
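The abstract describes the idea only at a high level. A minimal single-process sketch of simulated annealing with a fixed (non-adaptive) resampling interval and a geometric cooling schedule might look like the following. All function names, parameters, and the Boltzmann-weighted resampling rule here are illustrative assumptions, not the paper's actual implementation, which distributes chains across processors.

```python
import math
import random

def rastrigin(x):
    # Classic Rastrigin test function; global minimum 0 at the origin.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def anneal_segment(x, temp, steps, rng):
    # Run one Metropolis chain for a fixed number of steps at temperature temp.
    fx = rastrigin(x)
    for _ in range(steps):
        i = rng.randrange(len(x))
        y = list(x)
        y[i] += rng.gauss(0.0, 0.5)  # illustrative proposal width
        fy = rastrigin(y)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fy <= fx or rng.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy
    return x, fx

def parallel_sa(dim=5, chains=4, rounds=50, interval=200, t0=5.0, cooling=0.9, seed=1):
    rng = random.Random(seed)
    states = [[rng.uniform(-5.12, 5.12) for _ in range(dim)] for _ in range(chains)]
    temp = t0
    for _ in range(rounds):
        # In a real implementation each chain runs on its own processor;
        # here the "parallel" chains are advanced sequentially for clarity.
        results = [anneal_segment(s, temp, interval, rng) for s in states]
        # Resampling step at a fixed interval: redraw chains in proportion to
        # Boltzmann weight, concentrating effort on the most promising states.
        fmin = min(f for _, f in results)  # shift to avoid underflow to all-zero weights
        weights = [math.exp(-(f - fmin) / temp) for _, f in results]
        states = [list(rng.choices(results, weights=weights)[0][0]) for _ in range(chains)]
        temp *= cooling  # fixed geometric schedule (no variance-based adaptive cooling)
    return min(rastrigin(s) for s in states)
```

The key structural point the sketch illustrates is that communication happens only at resampling boundaries; between them, chains run independently, which is what makes the approach amenable to parallel scaling.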

Cited by 23 publications (11 citation statements)
References 49 publications
“…There are, of course, other algorithms for performing function optimization on a cluster computer. MDAS is used here, however, because its worst-case speedup characteristics are known; it is scalable; and, because it only requires solution vectors to be sent out to workers but not sent back, it has reduced inter-compute-node communication overhead relative to other parallel optimization algorithms such as the simulated annealing-based algorithms developed in [70] and [71]. Further, unlike algorithms such as simulated annealing, it always makes small steps from a feasible starting point and hence is less prone to becoming trapped in an infeasible region.…”
Section: Methods
confidence: 99%
“…Several algorithms have been used to solve this challenge. Initial studies using gene circuits used a global optimization approach called parallel Lam Simulated Annealing (pLSA) [18,64]. pLSA is a robust optimization method that is computationally costly.…”
Section: Reverse-engineering With Gene Circuits
confidence: 99%
“…Lastly, we parallelize the sampling of the Markov chain over all available cores (ranging from 4 to 6 on current consumer hardware) following the adaptive resampling method of Lou and Reinitz [LR16].…”
Section: Further Implementation Details
confidence: 99%