2011 13th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC)
DOI: 10.1109/synasc.2011.7
Communication Schemes of a Parallel Fluid Solver for Multi-scale Environmental Simulations

Cited by 9 publications (16 citation statements). References 8 publications.
“…In [22], the basic communication routines were introduced, and they are displayed in Figure 2. The parallelization strategy is based on a distribution of logical grids to different processes, where the data grids can use their internal finite difference stencil and the ghost halos are exchanged at specific times.…”
Section: Data Exchange
confidence: 99%
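The ghost-halo exchange mentioned in this statement is only described in prose here. Below is a minimal sketch of such an exchange for one distributed data grid, assuming a 1-D decomposition of a single field over MPI ranks; the field array, the interior size NX, and the neighbour handling are illustrative assumptions, not the solver's actual data layout.

/* Minimal sketch (assumption, not from the paper): each process owns NX
 * interior cells plus one ghost cell on each side and exchanges boundary
 * layers with its left/right neighbours before applying its stencil. */
#include <mpi.h>

#define NX 16                        /* interior cells per process (illustrative) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double field[NX + 2];            /* indices 0 and NX+1 are ghost cells */
    for (int i = 1; i <= NX; ++i) field[i] = (double)rank;

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* send own boundary cells, receive the neighbours' into the ghost cells */
    MPI_Sendrecv(&field[NX],     1, MPI_DOUBLE, right, 0,
                 &field[0],      1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&field[1],      1, MPI_DOUBLE, left,  1,
                 &field[NX + 1], 1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* the interior finite difference stencil can now use the filled ghosts */
    MPI_Finalize();
    return 0;
}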
“…The next experiments addressed a full time-step update, i.e. an iterative solution of the pressure Poisson equation until convergence was reached. The setup was exactly the same as in the previous experiment, namely a fully refined 3D domain using l-grid refinement levels of (2,2,2) and d-grid sizes of (16,16,16) for different depths (5 to 8), leading to around 20 million l-grids with more than 78.5 billion d-grid cells and over 707 billion variables at depth 8. It is worth noting that this setup also requires 28 TByte of combined main memory for storing all relevant data.…”
Section: Scaling Results
confidence: 99%
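The quoted totals are consistent with the stated grid parameters. The following back-of-the-envelope check assumes a complete octree of l-grids counted over all levels up to depth 8 and, as a guess made only to match the quoted variable count, 9 variables per d-grid cell:

/* Back-of-the-envelope check of the quoted grid sizes (assumptions noted). */
#include <stdio.h>

int main(void) {
    const int       depth           = 8;              /* finest depth quoted above   */
    const long long children        = 2LL * 2 * 2;    /* (2,2,2) l-grid refinement   */
    const long long cells_per_lgrid = 16LL * 16 * 16; /* (16,16,16) d-grid           */

    long long lgrids = 0, level_grids = 1;
    for (int d = 0; d <= depth; ++d) {        /* count every level of a full octree   */
        lgrids += level_grids;
        level_grids *= children;
    }
    long long cells = lgrids * cells_per_lgrid;
    long long vars  = cells * 9;              /* 9 variables per cell: an assumption
                                                 chosen to reproduce the quoted total */

    printf("l-grids:      %lld (~20 million)\n",   lgrids); /* 19,173,961     */
    printf("d-grid cells: %lld (~78.5 billion)\n", cells);   /* 78,536,544,256 */
    printf("variables:    %lld (~707 billion)\n",  vars);    /* ~7.07e11       */
    return 0;
}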
“…The communication phase consists of three sequential steps as described in Frisch et al. First of all, all d-grids that have not yet been updated during the computation phase are set to the averaged values of their corresponding child d-grids in a bottom-up step. In a second, horizontal step, all adjacent d-grids update their ghost layers, before all resulting ghost layers of d-grids are set properly on different levels (due to adaptive grid refinement) in a final top-down step. The l-grid management hereby has to pay attention to flux conservation across d-grid boundaries to guarantee data integrity and consistency.…”
Section: Fluid Flow Simulations
confidence: 99%
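The three steps quoted above translate naturally into a recursive traversal of the l-grid tree; the sketch below shows only that control flow under that assumption. All type and function names (lgrid_t, average_from_children, exchange_ghost_layer, interpolate_to_children) are illustrative placeholders rather than the solver's actual API, and the stubs deliberately omit the flux-conservation handling across d-grid boundaries that the citing paper mentions.

/* Schematic of the three-step communication phase; all names are illustrative. */
#include <stdio.h>

typedef struct lgrid {
    struct lgrid *children[8];       /* (2,2,2) refinement: up to 8 child l-grids  */
    struct lgrid *neighbours[6];     /* face neighbours on the same level          */
    double        cells[18][18][18]; /* (16,16,16) d-grid plus one ghost layer     */
    int           updated;           /* set during the preceding computation phase */
} lgrid_t;

/* Stubs standing in for the solver's actual routines (assumed, not from the paper). */
static void average_from_children(lgrid_t *g)             { (void)g; /* restriction  */ }
static void exchange_ghost_layer(lgrid_t *a, lgrid_t *b)  { (void)a; (void)b; }
static void interpolate_to_children(lgrid_t *g)           { (void)g; /* prolongation */ }

/* Step 1: bottom-up -- d-grids not updated in the computation phase receive
 * the averaged values of their children. */
static void bottom_up(lgrid_t *g) {
    for (int c = 0; c < 8; ++c)
        if (g->children[c]) bottom_up(g->children[c]);
    if (!g->updated && g->children[0]) average_from_children(g);
}

/* Step 2: horizontal -- adjacent d-grids on the same level swap ghost layers. */
static void horizontal(lgrid_t *g) {
    for (int f = 0; f < 6; ++f)
        if (g->neighbours[f]) exchange_ghost_layer(g, g->neighbours[f]);
    for (int c = 0; c < 8; ++c)
        if (g->children[c]) horizontal(g->children[c]);
}

/* Step 3: top-down -- ghost layers at refinement jumps are filled from the
 * coarser level (needed where the adaptive refinement changes depth). */
static void top_down(lgrid_t *g) {
    if (g->children[0]) interpolate_to_children(g);
    for (int c = 0; c < 8; ++c)
        if (g->children[c]) top_down(g->children[c]);
}

void communication_phase(lgrid_t *root) {
    bottom_up(root);
    horizontal(root);
    top_down(root);
}

int main(void) {
    static lgrid_t root = {0};       /* a single coarse l-grid, no children */
    communication_phase(&root);
    printf("communication phase completed\n");
    return 0;
}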