2019
DOI: 10.1098/rsta.2018.0144

Multiscale computing for science and engineering in the era of exascale performance

Abstract: In this position paper, we discuss two relevant topics: (i) generic multiscale computing on emerging exascale high-performance computing environments, and (ii) the scaling of such applications towards the exascale. We will introduce the different phases in developing a multiscale model and simulating it on the available computing infrastructure, and argue that we could rely on it both at the conceptual modelling level and when actually executing the multiscale simulation, and maybe should further develop gen…

Cited by 23 publications (28 citation statements). References 53 publications.
“…Most of the MD codes used today have been designed or adapted to run on parallel computer systems. NAMD [136], for example, designed for high-performance simulation of large biomolecular systems, has been used to simulate systems consisting of tens of millions of atoms [137], although any MD code with long-range interactions, which are communication bound, will not scale effectively in a strong sense to reach the required time scales [138]. OpenMM [139] and ACEMD (the latter now uses the OpenMM kernels) [140], designed and optimized for GPUs, are among the fastest MD codes in terms of single-GPU performance.…”
Section: Software Approaches (mentioning, confidence 99%)
“…Mechanisms are provided in HMS to dynamically evaluate a sub-model from another model, to extract relevant data from completed sub-model evaluations, and to handle common errors encountered during sub-model evaluation. The HMS approach to multiscale model development is an example of the concept of Distributed Multiscale Computing (DMC), where multiscale simulations involve the orchestrated execution of individual sub-model components. DMC has the potential to take full advantage of emerging exascale high-performance computers, but challenges remain in managing these highly dynamic computations on computers that have traditionally been used for monolithic codes employing domain decomposition.…”
Section: Results (mentioning, confidence 99%)
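
To make the DMC pattern concrete, here is a minimal toy sketch, not HMS itself: each MPI rank advances its own illustrative sub-model, and the ranks periodically exchange a reduced coupling quantity. The sub-model, the coupling rule and all constants are hypothetical placeholders.

/* Toy DMC-style coupling sketch (illustrative only, not HMS):
 * each rank evolves a local sub-model; a collective exchange
 * supplies the macro-scale coupling between sub-models. */
#include <mpi.h>
#include <stdio.h>

static double micro_model(double state) {
    return 0.99 * state;               /* stand-in for an expensive fine-scale solver */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double state = 1.0 + rank;             /* one sub-model per rank */
    for (int step = 0; step < 5; step++) {
        state = micro_model(state);        /* local sub-model evaluation */
        double sum;                        /* gather a reduced quantity ... */
        MPI_Allreduce(&state, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        state += 0.01 * (sum / size - state);   /* ... and relax toward the mean */
    }
    printf("rank %d: final state %f\n", rank, state);
    MPI_Finalize();
    return 0;
}

In a real DMC setting the collective would be replaced by the orchestration layer (HMS, or a coupling library), which decides at run time which sub-models to launch and which data to extract; the point here is only that sub-models run as separate components and meet at well-defined coupling steps.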
“…Through a variety of multiscale approaches, researchers seek to bypass limits faced by discrete particle simulations [48][49][50][51][52][53][54]. This is critically important for energetic materials so that continuum simulations can incorporate the effects of microscale or mesoscale structural features in materials, while using information that is ultimately sourced from quantum mechanical simulations.…”
Section: Introduction (mentioning, confidence 99%)
“…After the completion of the research described here, the MPI Forum adopted the revised model of MPI transfers [23]. This large-count model supports numbers of items to be transferred that are MPI_Count in size (normally 63-bit signed integers on 64-bit architectures), versus 2³¹ in MPI-3.1, plus all the corollary changes needed to make large transfers work for point-to-point, collective, one-sided, datatype and I/O operations. A second API is provided in both C and Fortran that supports these new, BigCount modes.…”
Section: Standardization in MPI-4 (mentioning, confidence 99%)
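
As a minimal sketch of what the BigCount C API looks like, assuming an MPI-4 implementation: the _c-suffixed variants take MPI_Count counts, so a single message can carry more than 2³¹ elements. The buffer below is 4 GiB, so run this only with at least two ranks and that much memory per rank.

/* Large-count ("BigCount") point-to-point sketch; requires MPI-4. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Count n = (MPI_Count)1 << 32;     /* 2^32 elements: too big for a plain int */
    char *buf = malloc((size_t)n);        /* 4 GiB; check for NULL in real code */

    if (rank == 0) {
        MPI_Send_c(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv_c(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Under MPI-3.1 the same transfer would have to be expressed with derived datatypes or split into chunks, since the count parameter of MPI_Send is a plain int.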
“…Hoekstra et al [31] observe that the lattice Boltzmann method possesses an algorithmic structure that will enable it to continue its scaling performance on larger supercomputers in the transition to exascale platforms. In part, this is due to the fact that, unlike some algorithms, the lattice Boltzmann method does not possess a hard limit on scalability that inhibits performance at large scale.…”
Section: Extreme Scale Performance (mentioning, confidence 99%)
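
The structural property behind that observation, purely local collision plus nearest-neighbour streaming, is visible even in a minimal single-node D2Q9 stream-and-collide loop. This sketch uses illustrative parameters, periodic boundaries and no domain decomposition; under decomposition, only the streaming step needs communication, and only with nearest neighbours.

/* Minimal D2Q9 lattice Boltzmann stream-and-collide sketch.
 * Collision is purely local; streaming touches only nearest
 * neighbours, which is what makes domain decomposition cheap. */
#include <stdio.h>

#define NX 64
#define NY 64
#define Q  9

static const int    cx[Q] = {0, 1, 0,-1, 0, 1,-1,-1, 1};
static const int    cy[Q] = {0, 0, 1, 0,-1, 1, 1,-1,-1};
static const double w [Q] = {4./9., 1./9., 1./9., 1./9., 1./9.,
                             1./36., 1./36., 1./36., 1./36.};

static double f[Q][NX][NY], ftmp[Q][NX][NY];

int main(void) {
    const double tau = 0.6;                          /* BGK relaxation time */
    for (int i = 0; i < Q; i++)                      /* fluid at rest */
        for (int x = 0; x < NX; x++)
            for (int y = 0; y < NY; y++)
                f[i][x][y] = w[i];

    for (int step = 0; step < 100; step++) {
        /* Streaming: shift each population to its neighbour (periodic wrap) */
        for (int i = 0; i < Q; i++)
            for (int x = 0; x < NX; x++)
                for (int y = 0; y < NY; y++)
                    ftmp[i][(x + cx[i] + NX) % NX][(y + cy[i] + NY) % NY] = f[i][x][y];

        /* Collision: local BGK relaxation toward equilibrium */
        for (int x = 0; x < NX; x++)
            for (int y = 0; y < NY; y++) {
                double rho = 0., ux = 0., uy = 0.;
                for (int i = 0; i < Q; i++) {
                    rho += ftmp[i][x][y];
                    ux  += cx[i] * ftmp[i][x][y];
                    uy  += cy[i] * ftmp[i][x][y];
                }
                ux /= rho; uy /= rho;
                for (int i = 0; i < Q; i++) {
                    double cu  = 3. * (cx[i]*ux + cy[i]*uy);
                    double feq = w[i] * rho * (1. + cu + 0.5*cu*cu
                                               - 1.5*(ux*ux + uy*uy));
                    f[i][x][y] = ftmp[i][x][y] - (ftmp[i][x][y] - feq) / tau;
                }
            }
    }
    printf("done\n");
    return 0;
}

Each site's update depends only on its immediate neighbours, so under domain decomposition the communication volume per rank grows with the subdomain surface while compute grows with its volume, which is why the method continues to scale well at large core counts.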