Order, Disorder and Criticality 2017
DOI: 10.1142/9789813232105_0006
Monte Carlo Methods for Massively Parallel Computers

Abstract: Applications that require substantial computational resources today cannot avoid the use of heavily parallel machines. Embracing the opportunities of parallel computing and especially the possibilities provided by a new generation of massively parallel accelerator devices such as GPUs, Intel's Xeon Phi or even FPGAs enables applications and studies that are inaccessible to serial programs. Here we outline the opportunities and challenges of massively parallel computing for Monte Carlo simulations in statistica…

Cited by 11 publications (8 citation statements). References 125 publications (212 reference statements).
“…We finally consider the scaling of run times of the GC and PT techniques with the latter scaled to achieve the same success probability in finding ground states as the former. We compare the timings of the GC method to two different implementations of PT, one regular CPU code and a highly optimized implementation on graphics processing units (GPUs) [66,67]. The GPU code is about 128 times faster than the CPU implementation.…”
Section: Run Times and Computational Complexity (mentioning)
Confidence: 99%
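The excerpt above compares the GC and PT methods after rescaling run times so both reach the same probability of finding ground states. A common way to make such comparisons (an assumed convention here, not necessarily the exact rescaling used in the cited work) is the time-to-solution measure: the single-run time multiplied by the number of independent repetitions needed to succeed at least once with, say, 99% probability. A minimal sketch:

```python
import math

def time_to_solution(t_run: float, p_success: float, target: float = 0.99) -> float:
    """Expected wall time until a heuristic with per-run time `t_run` and
    per-run success probability `p_success` has found the ground state at
    least once with probability `target`."""
    if not 0.0 < p_success < 1.0:
        raise ValueError("p_success must lie strictly between 0 and 1")
    # Number of independent repeats r satisfying 1 - (1 - p)^r >= target
    repeats = math.ceil(math.log(1.0 - target) / math.log(1.0 - p_success))
    return t_run * max(repeats, 1)

# A run that succeeds half the time must be repeated 7 times to reach 99%:
print(time_to_solution(1.0, 0.5))  # 7.0
```

With this normalization, a 128x faster GPU implementation translates directly into a 128x smaller time-to-solution at equal per-run success probability.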
“…As this paper addresses academic examples, computation cost is acceptable. In addition, as MCS is “embarrassingly parallel”, it can be easily implemented to take advantage of the parallel processing capabilities of current personal computers …”
Section: Probabilistic Robust Topology Optimization (mentioning)
Confidence: 99%
“…In addition, as MCS is "embarrassingly parallel", it can be easily implemented to take advantage of the parallel processing capabilities of current personal computers [34]. In this work, both the expected value and the standard deviation of the output displacement response are used to construct the objective function. Before stating the robust optimization problem, evaluation of mean and standard deviation by MCS is explained.…”
Section: Probabilistic Robust Topology Optimization (mentioning)
Confidence: 99%
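The excerpt describes Monte Carlo simulation (MCS) as "embarrassingly parallel": independent samples can be drawn concurrently and pooled to estimate the mean and standard deviation of a response. A minimal sketch, with a hypothetical response g(x) = x² standing in for the displacement evaluation of the cited work:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def mc_batch(seed: int, n: int) -> tuple[float, float, int]:
    """One independent Monte Carlo batch: draw random inputs x ~ N(0, 1),
    evaluate the (hypothetical) response g(x) = x**2, and return running
    sums.  In a real setting g would be a finite-element evaluation."""
    rng = random.Random(seed)          # private RNG: batches are independent
    s = sq = 0.0
    for _ in range(n):
        g = rng.gauss(0.0, 1.0) ** 2
        s += g
        sq += g * g
    return s, sq, n

# Batches share no state, so they can run concurrently; for CPU-bound models
# one would use processes or GPUs, but threads suffice for a sketch.
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(mc_batch, range(8), [50_000] * 8))

total = sum(n for _, _, n in parts)
mean = sum(s for s, _, _ in parts) / total
var = sum(sq for _, sq, _ in parts) / total - mean ** 2
print(f"E[g] ~ {mean:.3f}, Std[g] ~ {var ** 0.5:.3f}")  # exact: 1.0, sqrt(2)
```

Returning running sums rather than raw samples keeps the per-batch result tiny, which is what makes the scheme scale to many workers with negligible communication.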
“…While these methods can significantly extend the degree to which the free-energy landscape is being explored and accelerate the convergence to equilibrium, simulations of relevant biopolymers still frequently require very substantial computational resources. The significant growth in the number of cores in available (super-)computers is unfortunately only of limited utility to simulations using these techniques, as exploration of phase space and convergence to equilibrium are intimately linked to the number of time steps, and there is only limited scope for speeding up each step through (parallel) task splitting [13]. A natural way of using the available parallel resources consists of running many short simulations independently and combining the resulting statistics to improve the degree of sampling of the relevant states [14].…”
Section: Introduction (mentioning)
Confidence: 99%
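Combining "many short simulations run independently", as the excerpt above describes, requires merging per-run statistics without keeping the raw samples. A sketch using the standard one-pass variance-merging update (an illustration, not the cited work's code), where each run is summarized as (count, mean, sum of squared deviations):

```python
def merge(a: tuple, b: tuple) -> tuple:
    """Combine two summaries (n, mean, M2) of independent runs into one,
    using the parallel variance-merging update (Chan et al.)."""
    n_a, mean_a, m2_a = a
    n_b, mean_b, m2_b = b
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    # Cross term accounts for the shift between the two run means:
    m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
    return n, mean, m2

# Three short runs summarized as (count, mean, sum of squared deviations):
runs = [(4, 2.0, 10.0), (6, 3.0, 12.0), (10, 2.5, 8.0)]
n, mean, m2 = runs[0]
for r in runs[1:]:
    n, mean, m2 = merge((n, mean, m2), r)
print(n, mean, m2 / (n - 1))  # pooled count, mean, sample variance
```

The merge is associative, so summaries can be combined in any order, e.g. in a tree reduction across thousands of workers.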