2010
DOI: 10.1007/s11227-010-0440-0
Dynamic-CoMPI: dynamic optimization techniques for MPI parallel applications

Abstract: This work presents an optimization of MPI communications, called Dynamic-CoMPI, which uses two techniques in order to reduce the impact of communications and non-contiguous I/O requests in parallel applications. These techniques are independent of the application and complementary to each other. The first technique is an optimization of the Two-Phase collective I/O technique from ROMIO, called Locality aware strategy for Two-Phase I/O (LA-Two-Phase I/O). In order to increase the locality of the file accesses…

Year Published: 2012–2022

Cited by 16 publications (6 citation statements) · References 34 publications
“…After executing the Computing Phase Loop, the synthetic application performs the Communication Phase by executing the Communication Phase Loop (lines 10 to 12), whose number of iterations is determined by the input parameter β. This phase is based on the MPBench benchmark [16]. Of all the MPI operations executed by this benchmark, MPI_Alltoall was particularly interesting for our work because it is widely used in scientific applications.…”
Section: Name, Operation, Bytes Per Iteration
confidence: 99%
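The Communication Phase Loop described above is built around MPI_Alltoall, in which every rank sends a distinct chunk to every other rank. As a minimal sketch of that exchange pattern (a pure-Python simulation of the semantics, not actual MPI code; the function name and rank count are illustrative):

```python
def alltoall(send_buffers):
    """Simulate MPI_Alltoall: the j-th chunk of rank i lands in the i-th slot of rank j."""
    n = len(send_buffers)
    return [[send_buffers[src][dst] for src in range(n)] for dst in range(n)]

# Each of 4 simulated ranks prepares one chunk per destination rank.
send = [[f"r{i}->r{j}" for j in range(4)] for i in range(4)]
recv = alltoall(send)

# After the exchange, rank 0 holds exactly the chunks that every rank addressed to it.
print(recv[0])  # ['r0->r0', 'r1->r0', 'r2->r0', 'r3->r0']
```

In real MPI the same exchange would be a single collective call (`MPI_Alltoall` in C, `comm.alltoall` in mpi4py); this sketch only shows the data movement it performs.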
“…Dynamic-CoMPI [13] presents an optimization of MPI communications that uses different techniques in order to reduce the impact of communications and non-contiguous I/O requests in parallel applications. These techniques are independent of the application and complementary to each other.…”
Section: Related Work
confidence: 99%
“…This analysis consists of executing a synthetic benchmark that measures the time required for compressing/decompressing data using different algorithms. In our previous work [13], we found that a message has three main properties that affect the performance of compressing/decompressing data: the message size, the datatype of each message element, and the redundancy level of the message data.…”
Section: Obtaining Compression Information
confidence: 99%
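The compression analysis described above can be sketched as a small timing harness. This is an illustrative stand-in, not the authors' benchmark: the algorithm set (zlib, bz2, lzma), the 64 KiB message size, and the two payloads are assumptions chosen only to show how redundancy level drives compression ratio and cost:

```python
import bz2
import lzma
import os
import time
import zlib

def benchmark(payload, algorithms):
    """Time compress/decompress of one message payload for each algorithm."""
    results = {}
    for name, mod in algorithms.items():
        t0 = time.perf_counter()
        packed = mod.compress(payload)
        t1 = time.perf_counter()
        assert mod.decompress(packed) == payload  # round-trip sanity check
        t2 = time.perf_counter()
        results[name] = {
            "ratio": len(packed) / len(payload),  # < 1.0 means the message shrank
            "compress_s": t1 - t0,
            "decompress_s": t2 - t1,
        }
    return results

ALGOS = {"zlib": zlib, "bz2": bz2, "lzma": lzma}

# Two synthetic 64 KiB messages with opposite redundancy levels.
redundant = b"abcd" * 16384          # highly redundant payload
incompressible = os.urandom(65536)   # essentially no redundancy

high = benchmark(redundant, ALGOS)
low = benchmark(incompressible, ALGOS)
```

A runtime-adaptive scheme could consult such measurements to pick an algorithm (or skip compression entirely) per message, based on its size and estimated redundancy.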
“…Systems overview; the other is run-time adaptive message compression. The benefit of LA-Two-Phase I/O was previously examined with PVFS2 as the underlying storage system [8].…”
Section: Design
confidence: 99%
“…We are using Dynamic-CoMPI [8] as an MPI-IO implementation and Papio [9] as a shared storage system, which implements parallel I/O and performance reservation. We are developing the ADIO layer to connect these systems and to evaluate the benefits of the reservation-based performance isolation approach.…”
Section: Introduction
confidence: 99%