2014
DOI: 10.1016/j.jhydrol.2014.02.035
Parallel computation of a dam-break flow model using OpenMP on a multi-core computer

Cited by 45 publications (27 citation statements)
References 21 publications
“…So the key to a parallel implementation based on shared memory is to determine the main loops that can be run separately in the model. OpenMP parallel instructions [38,39] can then be used to allocate these uncorrelated units to different processors, decreasing the calculation time. In our model, because the simulation is iterated over a series of time steps, it is most beneficial if all the calculation units in a time step can be run separately.…”
Section: Parallel Model Implementation
confidence: 99%
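The excerpt above points to the core idea: within one time step the per-cell updates are uncorrelated, so a work-sharing directive can distribute them across processors. Below is a minimal sketch of that pattern, assuming a generic one-dimensional stencil update; the array names (h, h_new), grid size, and update rule are illustrative placeholders, not the cited model's actual scheme.

#include <stdio.h>
#include <omp.h>

#define N 1000

int main(void)
{
    static double h[N], h_new[N];
    double dt = 0.01;

    /* dummy initial state for illustration only */
    for (int i = 0; i < N; ++i)
        h[i] = (i > N / 4 && i < N / 2) ? 1.0 : 0.0;

    /* the uncorrelated calculation units of one time step are
       distributed to different threads by the work-sharing directive */
    #pragma omp parallel for schedule(static)
    for (int i = 1; i < N - 1; ++i)
        h_new[i] = h[i] + dt * (h[i - 1] - 2.0 * h[i] + h[i + 1]);

    printf("h_new[N/3] = %f\n", h_new[N / 3]);
    return 0;
}

Compiled with OpenMP support (for example, gcc -fopenmp), the loop iterations are split across the available cores; without the flag the pragma is ignored and the code runs serially, which is one reason this style of parallelization is regarded as low-intrusion.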
“…In our model, because the simulation is iterated over a series of time steps, it is most beneficial if all the calculation units in a time step can be run separately. In the Conjugate Gradient Squared (CGS) method [37], adding OpenMP directives [38,39] to the loop calculations within an iterative step enables parallel computing. The different vector-vector multiplication calculations are distributed to different threads.…”
Section: Introduction
confidence: 99%
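The statement above describes pushing OpenMP directives into the vector operations of each Conjugate Gradient Squared (CGS) iteration. The sketch below shows that idea on the two kernels that dominate such iterations, an inner product with a reduction clause and an AXPY-style update; the function names and test values are assumptions for illustration, and the full CGS loop from reference [37] is omitted.

#include <stdio.h>
#include <omp.h>

/* inner product: the reduction clause combines per-thread partial sums */
static double dot(const double *x, const double *y, int n)
{
    double s = 0.0;
    #pragma omp parallel for reduction(+:s)
    for (int i = 0; i < n; ++i)
        s += x[i] * y[i];
    return s;
}

/* AXPY update: no cross-iteration dependence, safe to split across threads */
static void axpy(double a, const double *x, double *y, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] += a * x[i];
}

int main(void)
{
    enum { M = 8 };
    double x[M], y[M];
    for (int i = 0; i < M; ++i) { x[i] = 1.0; y[i] = 2.0; }

    axpy(0.5, x, y, M);                         /* y <- y + 0.5 * x */
    printf("dot(x, y) = %f\n", dot(x, y, M));   /* expected 20.0 */
    return 0;
}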
“…The parallel computation is put into practice in three ways: general-purpose computing on graphics processing units, via a message passing interface (MPI), or using a shared-memory architecture (OpenMP) [18]. In a shared-memory system, threads, which are processes that run in parallel, communicate with each other by reading from and writing to the same memory directly.…”
Section: Implementation of the Parallel Directives
confidence: 99%
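To make the shared-memory point concrete, the short example below has every thread read the same array directly and fold its partial result into a shared total, with a critical section guarding the concurrent writes. The variable names and sizes are placeholders, not taken from the cited work.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    enum { N = 1000 };
    static double data[N];
    double total = 0.0;

    for (int i = 0; i < N; ++i)
        data[i] = 1.0;

    #pragma omp parallel
    {
        double local = 0.0;

        /* every thread reads the same shared array directly */
        #pragma omp for
        for (int i = 0; i < N; ++i)
            local += data[i];

        /* writes to the shared total are serialized to avoid a data race */
        #pragma omp critical
        total += local;
    }

    printf("total = %f\n", total);   /* expected 1000.0 */
    return 0;
}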
“…The LISFLOOD-FP explicit storage cell model was parallelized using OpenMP [13], and Neal et al. [14] tested the OpenMP version of LISFLOOD-FP for a range of test cases with grid sizes that varied from 3 km to 3 m. The latter study showed that the choice of test case had a significant effect on the acceleration achieved. Zhang et al. [15] applied OpenMP to the parallel computation of a dam-break model and gained a speedup factor of 8.64× on a 16-core computer. Among MPI applications, TRENT [16,17], CalTWiMS [18], RMA [19], and Oger et al. [20] used MPI with domain decomposition in their parallel simulations.…”
Section: Introduction
confidence: 99%