Proceedings of the ACM/IEEE SC2004 Conference
DOI: 10.1109/sc.2004.8
A Performance and Scalability Analysis of the BlueGene/L Architecture

Cited by 36 publications (25 citation statements)
References 2 publications
“…Details on ASCI Q (Compaq) performance, as well as how SAGE was actually used in the optimization of the system performance, are described in [9]. The performance of Lightning (Linux) has been described in [10], and early details on the performance of Blue Gene/L (IBM) are given in [11].…”
Section: Parallel Scaling Performance
confidence: 99%
“…The techniques described by Petrini and provided by the HPCC benchmarking suite have been used to describe the network (and overall system) performance of many emerging supercomputer installations, including the Cray XT4 and Blue Gene/P at Oak Ridge National Laboratory [1], [2], the Blue Gene/L systems at Argonne and Lawrence Livermore National Laboratory [5], and the Roadrunner system at Los Alamos National Laboratory [3]. The HPCC benchmark is a prototypical example of a benchmarking suite that reports results as summary statistics.…”
Section: A Related Work
confidence: 99%
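The citation statement above notes that the HPCC suite reports results as summary statistics rather than raw per-trial measurements. A minimal sketch of that reduction step, with made-up latency values (the function name and the min/avg/max choice are illustrative assumptions, not HPCC's actual reporting code):

```python
# Illustrative sketch only: reduce raw per-trial measurements to the kind
# of summary statistics a benchmarking suite reports. The sample values
# are invented; HPCC's real reporting pipeline is not reproduced here.
import statistics

def summarize(samples):
    """Collapse a list of per-trial measurements into summary statistics."""
    return {
        "min": min(samples),
        "avg": statistics.mean(samples),
        "max": max(samples),
    }

latencies_us = [4.2, 4.3, 4.1, 4.5, 4.2, 9.7]  # one outlier trial
print(summarize(latencies_us))
```

Reporting only such summaries is compact, but (as the surrounding literature points out) it can hide the very outliers that reveal system interference.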
“…While this restricts the programming and usage model of applications on BG/L, it has the benefit that applications cannot be interrupted by background jobs or daemons. As a direct consequence, the machine is virtually noise free (Davis, Hoisie, Johnson, Kerbyson, Lang, Pakin, and Petrini 2004). […] This daemon manages the communication between the front end and compute nodes, executes system call requests from the compute nodes, and provides I/O access to applications. In addition, the CIOD allows tools to start and control one additional I/O node daemon, which in turn can communicate with the CIOD and control the compute nodes using a proprietary debugging interface provided by the CIOD.…”
Section: BG/L System Software
confidence: 99%
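The "virtually noise free" claim in the statement above is the kind of property typically demonstrated with a fixed-work-quantum probe: time an identical deterministic loop many times, and attribute iterations that run well past the fastest one to OS/daemon interference. A hedged sketch in that spirit (the iteration counts, threshold, and function name are illustrative assumptions, not the benchmark used in the cited work):

```python
# Illustrative fixed-work noise probe: time the same deterministic loop
# repeatedly; on a noisy node, background daemons and interrupts stretch
# some iterations well beyond the minimum. Parameters are assumptions.
import time

def probe_noise(iterations=200, work=20_000):
    """Return (fastest iteration time, count of iterations > 2x fastest)."""
    times = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        s = 0
        for i in range(work):
            s += i  # fixed, deterministic work quantum
        times.append(time.perf_counter() - t0)
    base = min(times)
    outliers = sum(1 for t in times if t > 2 * base)  # likely interference
    return base, outliers
```

On a dedicated, daemon-free compute node such as BG/L's, the outlier count from a probe like this would be expected to stay near zero.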