41st International Conference on Parallel Processing (ICPP), 2012
DOI: 10.1109/icpp.2012.15
Added Concurrency to Improve MPI Performance on Multicore

Abstract: MPI implementations typically equate an MPI process with an OS process, resulting in a coarse-grain programming model where MPI processes are bound to physical cores. Fine-Grain MPI (FG-MPI) extends the MPICH2 implementation of MPI and implements an integrated runtime system that allows multiple MPI processes to execute concurrently inside an OS process. FG-MPI's integrated approach makes it possible to add more concurrency than the available parallelism, while minimizing the overheads related to context switc…

Cited by 7 publications (2 citation statements); references 18 publications.
“…Another work describes how, even though it is possibly harmful within a single application, oversubscription can be used to efficiently execute multiple applications sharing one node [37]. To circumvent this drawback when applying MPI oversubscription within a single application, some work focused on enabling multiple MPI processes in one OS process [26], verifying the positive impact of such an implementation.…”
Section: Explicit Methods
confidence: 99%
“…A computer cluster can be considered an implementation of parallel computing: it can be viewed as a single system in many respects, overcoming the limited performance of a single computer and improving performance. Moreover, efforts to optimize the performance of clusters and parallel computing have never stopped, such as using CUDA inter-process communication [1], adding concurrency [2], and using dynamic network energy optimization [3] to improve MPI performance on the cluster.…”
Section: Introduction
confidence: 99%