2007
DOI: 10.3319/tao.2007.18.3.593(a)
Parallelization of the NASA Goddard Cumulus Ensemble Model for Massively Parallel Computing

Abstract: Massively parallel computing, using the Message Passing Interface (MPI), has been implemented in a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The implementation uses the domain-resemble concept to design a code structure that serves both the whole domain and the sub-domains after decomposition. Instead of inserting a group of MPI-related statements into the model routines, these statements are packed into a single routine. In other words, only a single call statement to the model code is utilized…
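The abstract describes decomposing the global model grid into per-process sub-domains. A minimal sketch of the index arithmetic behind such a 2-D decomposition is shown below; the function name, the row-major rank layout, and the even-split-with-remainder convention are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of 2-D domain-decomposition index arithmetic.
# The layout conventions here are assumptions for illustration only.
def subdomain_bounds(nx, ny, px, py, rank):
    """Map an MPI-style rank to its sub-domain index ranges.

    nx, ny -- global grid size in x and y
    px, py -- number of processes in each direction
    rank   -- process id in 0 .. px*py - 1 (row-major ordering)
    """
    ix, iy = rank % px, rank // px   # process coordinates on the grid
    bx, by = nx // px, ny // py      # base sub-domain size
    x0, y0 = ix * bx, iy * by
    # Last process in each direction absorbs any remainder points.
    x1 = nx if ix == px - 1 else x0 + bx
    y1 = ny if iy == py - 1 else y0 + by
    return (x0, x1), (y0, y1)
```

In a real MPI implementation, each sub-domain would also carry halo (ghost) points exchanged with its neighbors; per the abstract, the paper packs those MPI exchanges into a single routine rather than scattering them through the model code.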

Cited by 9 publications (4 citation statements)
References 38 publications
“…We chose the three-dimensional parallel version of the Goddard cloud ensemble model (GCE) for this study (Juang et al 2007, Tao et al 2003, Tao and Simpson 1993). Turbulence is parameterized via a 1.5-order closure scheme; the microphysical parameterization of clouds is a single-moment bulk scheme; and broadband shortwave- and longwave-radiative transfer modules are interactively coupled with cloud fields.…”
Section: Model Set-up and Case Description
confidence: 99%
“…The GCE itself has been implemented with a 2D domain decomposition using message-passing interface version 1 (MPI-1) with good parallel efficiency [14]. Thus, an ideal solution for the coarse-grain parallelism implemented in the MMF is to run more copies of GCEs with higher CPU counts in parallel, while still keeping the option of taking advantage of the fine-grain parallelism inside the GCE.…”
Section: The GCE Model
confidence: 99%
“…This property still holds for the domain-decomposition simulations. It should be pointed out that the method used here cannot achieve the "reproducibility" of parallel computing (i.e., Juang et al 2007). Due to the irregular grid points, the Chebyshev method will not achieve identical results for parallel domain-decomposition computation (Kopriva 1986, 1989, 1996; Kopriva and Kolias 1996).…”
Section: -D Nonlinear Shallow Water Model
confidence: 99%
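The "reproducibility" issue raised in the last citation statement comes down to floating-point addition being non-associative: a parallel reduction whose summation order changes with the domain decomposition need not be bit-identical across processor counts. A minimal standalone illustration (generic floating-point behavior, not code from the paper):

```python
# Floating-point addition is not associative, so two summation
# orders of the same three numbers can differ in the last bit --
# exactly the kind of difference a changed domain decomposition
# introduces into a parallel reduction.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # one summation order
right = a + (b + c)  # another summation order
print(left == right)  # prints False: the two orders round differently
```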