2013
DOI: 10.2172/1089988

Toward a new metric for ranking high performance computing systems.

Abstract: The High Performance Linpack (HPL), or Top 500, benchmark [1] is the most widely recognized and discussed metric for ranking high performance computing systems. However, HPL is increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate gradient (HPCG) benchmark. HPCG is composed of computations and data access patterns more commonly found in applications. Using HPCG we stri…
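To make the kind of computation HPCG exercises concrete, here is a minimal, self-contained sketch in C of an unpreconditioned conjugate gradient solve over a sparse matrix stored in compressed sparse row (CSR) form. The 1-D Poisson test matrix, problem size, and stopping tolerance are illustrative choices only; this is not the HPCG reference code or its validation procedure.

```c
/*
 * Minimal illustration (not the HPCG reference code) of the kernel mix a
 * conjugate gradient benchmark exercises: sparse matrix-vector products,
 * dot products, and vector updates over irregular, memory-bound data.
 * The 1-D Poisson matrix and problem size below are arbitrary choices.
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 1000  /* toy problem size */

/* y = A*x for a matrix stored in compressed sparse row (CSR) format */
static void spmv(int n, const int *rowptr, const int *col, const double *val,
                 const double *x, double *y) {
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; ++k)
            sum += val[k] * x[col[k]];
        y[i] = sum;
    }
}

static double dot(int n, const double *a, const double *b) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += a[i] * b[i];
    return s;
}

int main(void) {
    /* Assemble a tridiagonal 1-D Poisson matrix in CSR form. */
    int *rowptr = malloc((N + 1) * sizeof *rowptr);
    int *col = malloc(3 * N * sizeof *col);
    double *val = malloc(3 * N * sizeof *val);
    int nnz = 0;
    for (int i = 0; i < N; ++i) {
        rowptr[i] = nnz;
        if (i > 0)     { col[nnz] = i - 1; val[nnz++] = -1.0; }
        col[nnz] = i; val[nnz++] = 2.0;
        if (i < N - 1) { col[nnz] = i + 1; val[nnz++] = -1.0; }
    }
    rowptr[N] = nnz;

    double *x = calloc(N, sizeof *x);          /* initial guess x = 0 */
    double *b = malloc(N * sizeof *b);
    double *r = malloc(N * sizeof *r);
    double *p = malloc(N * sizeof *p);
    double *Ap = malloc(N * sizeof *Ap);
    for (int i = 0; i < N; ++i) { b[i] = 1.0; r[i] = b[i]; p[i] = r[i]; }

    /* Unpreconditioned CG iteration. */
    double rr = dot(N, r, r);
    for (int it = 0; it < 10 * N && sqrt(rr) > 1e-8; ++it) {
        spmv(N, rowptr, col, val, p, Ap);
        double alpha = rr / dot(N, p, Ap);
        for (int i = 0; i < N; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rr_new = dot(N, r, r);
        double beta = rr_new / rr;
        for (int i = 0; i < N; ++i) p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
    printf("final residual norm: %g\n", sqrt(rr));

    free(rowptr); free(col); free(val);
    free(x); free(b); free(r); free(p); free(Ap);
    return 0;
}
```

Each iteration is dominated by the sparse matrix-vector product, dot products, and vector updates, which are memory-bandwidth-bound with irregular access, in contrast to the dense, compute-bound factorization at the heart of HPL.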

Cited by 90 publications (33 citation statements); citing publications span 2014–2023.
References 3 publications.

“…We are working with the community to gradually unify existing techniques and tools including pragma-based source-to-source transformations [41,80], plugin-based GCC and LLVM to expose and tune all internal optimization decisions [30,31]; polyhedral source-to-source transformation tools [12]; differential analysis to detect performance anomalies and CPU/memory bounds [28,36]; just-in-time compilation for Android Dalvik or Oracle JDK; algorithm-level tuning [3]; techniques to balance communication and computation in numerical codes particularly for heterogeneous architectures [7,75]; Scalasca framework to automate analysis and modeling of scalability of HPC applications [13,40]; LIKWID for lightweight collection of hardware counters [76]; HPCC and HPCG benchmarks to collaboratively rank HPC systems [42,56]; benchmarks from GCC and LLVM, TAU performance tuning framework [68]; and all recent Periscope application tuning plugins [10,60].…”
Section: Discussion
mentioning
confidence: 99%
“…However, Linpack only reflects one aspect of computing platforms. That is why other tests were suggested later which became the basis for the Graph500 [11] and HPCG [12] benchmarks. All three ratings use the same technique: a basic algorithm is chosen and its software implementation is written and executed on each computing system in question, which results in a number that is used to judge the computer's properties.…”
Section: The Algowiki Project and Top500 Methodology
mentioning
confidence: 99%
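As a loose illustration of the ranking technique the excerpt above describes (run a fixed implementation of a chosen algorithm and reduce the run to a single number), the sketch below times a trivial vector-update kernel and reports a GFLOP/s figure. The kernel, problem size, repetition count, and flop accounting are hypothetical and do not reflect how HPL, Graph500, or HPCG actually measure, validate, or report results.

```c
/*
 * Toy illustration of the "run a fixed kernel, report one number" approach
 * described in the excerpt above. The kernel, size, repetition count, and
 * flop accounting are placeholders, not the Top500/HPCG procedure.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)  /* hypothetical vector length */
#define REPS 100     /* hypothetical repetition count */

int main(void) {
    double *x = malloc(N * sizeof *x);
    double *y = malloc(N * sizeof *y);
    for (int i = 0; i < N; ++i) { x[i] = 1.0; y[i] = 2.0; }

    clock_t t0 = clock();                    /* CPU time; a real benchmark uses wall clock */
    for (int r = 0; r < REPS; ++r)           /* fixed, repeatable workload */
        for (int i = 0; i < N; ++i)
            y[i] += 0.5 * x[i];              /* 2 flops per element (multiply + add) */
    clock_t t1 = clock();

    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    double gflops = 2.0 * N * REPS / secs / 1e9;
    printf("checksum %.1f, figure of merit: %.2f GFLOP/s\n",
           y[0] + y[N - 1], gflops);         /* one number used for comparison */

    free(x);
    free(y);
    return 0;
}
```

Real benchmarks add prescribed problem sizes, correctness checks, and reporting rules so that the resulting single figure of merit is comparable across systems.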
“…In this work, we demonstrate such an optimization for a data mining algorithm which solves regression and classification problems on vast data sets. Finally, the proposal of having an additional ranking of the Top500 list machines (like the Green500 [13] list with respect to power consumption) based on a high-performance CG (HPCG) implementation was recently made [14]. In this work, we apply the idea of an HPC benchmark to a full and relevant application, classification and regression of vast data sets. By processing data sets ranging from several hundreds of thousands of instances to multi-million data points in strong-scaling and weak-scaling settings, we are able to estimate the amount of parallelism needed to unleash the performance of classic CPU-based machines and clusters employing Intel Xeon Phi coprocessors and NVIDIA Kepler GPUs.…”
mentioning
confidence: 99%
“…In the case of accelerated clusters, the scalable heterogeneous computing benchmark suite [10] is a good candidate, which implements nearly all NAS benchmarks in OpenCL and CUDA, and can be easily executed on accelerators and GPUs. Finally, the proposal of having an additional ranking of the Top500 list machines (like the Green500 [13] list with respect to power consumption) based on a high-performance CG (HPCG) implementation was recently made [14]. There, the benchmarks are not limited to kernels; they are simplified versions of real simulation codes stemming from several application domains.…”
mentioning
confidence: 99%