2005
DOI: 10.1147/rd.492.0393

Design and implementation of message-passing services for the Blue Gene/L supercomputer

Abstract: The Blue Gene/L (BG/L) supercomputer, with 65,536 dual-processor compute nodes, was designed from the ground up to support efficient execution of massively parallel message-passing programs. Part of this support is an optimized implementation of the Message Passing Interface (MPI), which leverages the hardware features of BG/L. MPI for BG/L is implemented on top of a more basic message-passing infrastructure called the message layer. This message layer can be used both to implement other higher-level libraries…

Cited by 47 publications (40 citation statements)
References 18 publications

“…This scaling plot shows that use of the BG/L ADE SPI communications interfaces allows continued performance gains to values of atoms per node well below those achievable using MPI. The MPI implementation on Blue Gene/L [20] is quite good as evidenced by the results achieved on the 3D-FFT [22], but the scalability of Blue Matter using MPI appears to be limited by the performance of the neighborhood broadcast and reduce collectives discussed above, as can be seen in Table 2.…”
Section: Performance Results
confidence: 96%
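
The neighborhood broadcast and reduce collectives this excerpt refers to operate over small groups of nearby ranks rather than the full machine. As a rough, hedged sketch only (the rank/4 grouping and all names below are illustrative assumptions, not Blue Matter's actual code), such a neighborhood reduce can be built from an MPI sub-communicator:

#include <mpi.h>
#include <stdio.h>

/* Hedged sketch: reduce a value across a small "neighborhood" of ranks
 * by splitting MPI_COMM_WORLD into sub-communicators. The grouping by
 * rank/4 is an arbitrary illustrative choice, not Blue Matter's scheme. */
int main(int argc, char **argv)
{
    int rank, nbr_rank;
    double local, nbr_sum = 0.0;
    MPI_Comm nbr_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = (double)rank;  /* stand-in for per-node partial data */

    /* Group every four consecutive ranks into one neighborhood. */
    MPI_Comm_split(MPI_COMM_WORLD, rank / 4, rank, &nbr_comm);
    MPI_Comm_rank(nbr_comm, &nbr_rank);

    /* The reduce is confined to the neighborhood communicator. */
    MPI_Allreduce(&local, &nbr_sum, 1, MPI_DOUBLE, MPI_SUM, nbr_comm);

    printf("rank %d (nbr rank %d): neighborhood sum = %f\n",
           rank, nbr_rank, nbr_sum);

    MPI_Comm_free(&nbr_comm);
    MPI_Finalize();
    return 0;
}
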
“…We have implemented the second and third options and have found that, as a result of optimizations of the MPI collectives for BG/L [20], the third option gives superior performance. Even so, the realized performance on MPI does not yet reflect the full capabilities of the hardware.…”
Section: Parallelization Strategies and Challenges
confidence: 99%
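
The excerpt's broader point, that the collectives tuned for BG/L [20] can beat hand-rolled communication, can be illustrated generically. The sketch below (plain MPI, assumed for illustration; the cited paper's "options" are not reconstructed here) contrasts a naive point-to-point broadcast with the single library call that can map onto BG/L's dedicated collective network:

#include <mpi.h>

/* Hedged sketch: prefer the library collective over a hand-rolled
 * point-to-point loop. On BG/L the optimized MPI_Bcast can use the
 * dedicated collective (tree) network; the manual loop cannot. */
void broadcast_naive(double *buf, int count, int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    if (rank == root) {
        for (int i = 0; i < size; i++)
            if (i != root)
                MPI_Send(buf, count, MPI_DOUBLE, i, 0, comm);
    } else {
        MPI_Recv(buf, count, MPI_DOUBLE, root, 0, comm, MPI_STATUS_IGNORE);
    }
}

void broadcast_optimized(double *buf, int count, int root, MPI_Comm comm)
{
    /* One call; the implementation picks the best algorithm/hardware. */
    MPI_Bcast(buf, count, MPI_DOUBLE, root, comm);
}
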
“…The Pthreads library is used to spawn multiple UPC threads on systems with SMP nodes. Implemented messaging methods include TCP/IP sockets, LAPI [23], Myrinet/GM transport [19], and the BlueGene/L messaging framework [1].…”
Section: The IBM XLUPC Runtime
confidence: 99%
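
As a minimal sketch of the Pthreads usage described in this excerpt (illustrative only; upc_thread_main and the thread count are assumptions, not the XLUPC runtime's code), spawning one worker per UPC thread on an SMP node might look like:

#include <pthread.h>
#include <stdio.h>

#define NUM_UPC_THREADS 4  /* illustrative count, e.g. one per SMP core */

/* Hedged sketch: each pthread stands in for one UPC thread; a real
 * runtime would also set up shared-heap and messaging state here. */
static void *upc_thread_main(void *arg)
{
    long id = (long)arg;
    printf("UPC thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_UPC_THREADS];

    for (long i = 0; i < NUM_UPC_THREADS; i++)
        pthread_create(&threads[i], NULL, upc_thread_main, (void *)i);

    for (int i = 0; i < NUM_UPC_THREADS; i++)
        pthread_join(threads[i], NULL);

    return 0;
}
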
“…The current implementation of MPI on BG/L [17] is based on MPICH2 [5] from Argonne National Laboratory. The BG/L version is MPI-1.2 compliant [15] and supports a subset of the MPI-2 standard. There are parts of MPI-2, such as dynamic process management, that are not supported.…”
Section: MPI on BG/L
confidence: 99%
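
A program can check which standard level such an implementation reports at runtime. The minimal sketch below uses the standard MPI_Get_version call; on an MPI-1.2 implementation like the BG/L port described here, it would report 1.2, and MPI-2 dynamic process management calls such as MPI_Comm_spawn would be absent or unsupported:

#include <mpi.h>
#include <stdio.h>

/* Hedged sketch: query the MPI standard level at runtime. On an
 * MPI-1.2 implementation, version/subversion report 1 and 2. */
int main(int argc, char **argv)
{
    int version, subversion;

    MPI_Init(&argc, &argv);
    MPI_Get_version(&version, &subversion);
    printf("MPI standard level: %d.%d\n", version, subversion);
    MPI_Finalize();
    return 0;
}
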