2016
DOI: 10.1145/2894746

X10 and APGAS at Petascale

Abstract: X10 is a high-performance, high-productivity programming language aimed at large-scale distributed and shared-memory parallel applications. It is based on the Asynchronous Partitioned Global Address Space (APGAS) programming model, supporting the same fine-grained concurrency mechanisms within and across shared-memory nodes. We demonstrate that X10 delivers solid performance at petascale by running (weak scaling) eight application kernels on an IBM Power 775 supercomputer utilizing up to 55,680 Power7 cores (fo…
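To make the APGAS constructs in the abstract concrete, here is a minimal X10 sketch in the style of the language's standard HelloWholeWorld sample (the class name and message are illustrative, not from the paper): finish waits for all transitively spawned tasks, async spawns a lightweight task, and at (p) shifts execution to place p, so the same constructs express concurrency both within and across nodes.

    // Minimal APGAS sketch: spawn one asynchronous task at every
    // place (node or memory domain) and wait for all to terminate.
    public class HelloPlaces {
        public static def main(args:Rail[String]):void {
            finish for (p in Place.places()) {
                at (p) async {
                    // 'here' evaluates to the place the task runs at
                    Console.OUT.println("Hello from " + here);
                }
            }
        }
    }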

Cited by 10 publications (10 citation statements) · References 26 publications
“…On the other hand, new complete languages such as X10 [29], ECL [33], UPC [21], Legion [3], and Chapel [4] have been defined by exploiting a data-centric approach. Furthermore, new APIs based on a revolutionary approach, such as GA [20] and SHMEM [19], have been implemented according to a library-based model.…”
Section: Exascale Programming Systems
confidence: 99%
“…Although not many cloud-based data analysis frameworks are available today for end users, within a few years they will become common [29]. Some current solutions are based on open source systems, such as Apache Hadoop and Mahout, Spark and SciDB, while others are proprietary solutions provided by companies such as Google, Microsoft, EMC, Amazon, BigML, Splunk Hunk, and InsightsOne.…”
Section: Introduction
confidence: 99%
“…Parallel schedulers aim to minimize the completion time of a parallel computation. Many results have been shown for parallel scheduling, including on runtime [Acar et al. 2013; Arora et al. 2001; Blumofe et al. 1996; Blumofe and Leiserson 1999; Brent 1974; Burton and Sleep 1981; Eager et al. 1989; Frigo et al. 1998; Greiner and Blelloch 1999; Halstead 1985; Tardieu et al. 2014; Ullman 1975], space usage [Blumofe and Leiserson 1998; Narlikar and Blelloch 1999], cache utility [Acar et al. 2002; Blelloch et al. 2011; Blelloch and Gibbons 2004; Chowdhury and Ramachandran 2008], and granularity control. Nearly all of these schedulers implement the greedy scheduling principle [Brent 1974; Eager et al. 1989], which requires keeping processors as busy as possible, but differ in the precise assignments of tasks to processors.…”
Section: Related Work
confidence: 99%
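For context, the greedy-scheduling principle quoted above carries a classic bound (a standard result due to Brent 1974 and Eager et al. 1989, restated here for reference): for a computation with total work T_1 and span (critical-path length) T_\infty, any greedy schedule on p processors completes in time

    T_p \le T_1 / p + T_\infty

which is within a factor of two of optimal, since both T_1 / p and T_\infty are lower bounds on T_p.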
“…Furthermore, the use of third-party libraries such as netCDF [16] requires the creation of X10 wrapper classes to translate between X10 and C++ objects. Finally, a high-performance communication backend is only available on clusters using the PAMI interconnect; otherwise, an MPI translation layer is used [17].…”
Section: Motivation and Related Work
confidence: 99%
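As a rough illustration of the wrapper-class burden the quotation describes, the sketch below uses X10's C++-backend interop annotations (x10.compiler.Native and x10.compiler.NativeCPPInclude) to bind a single netCDF C call. The wrapper class is hypothetical, not the cited work's actual code, and the named #ncid placeholder assumes a recent X10 release (older releases used numbered placeholders such as #1).

    import x10.compiler.Native;
    import x10.compiler.NativeCPPInclude;

    // Hypothetical wrapper: one @Native stub per netCDF C function
    // is needed, plus conversions between X10 and C++ objects.
    @NativeCPPInclude("netcdf.h")
    public class NetcdfFile {
        // On the C++ backend this compiles to a call to nc_close();
        // #ncid is replaced by the X10 argument (assumed syntax).
        @Native("c++", "nc_close(#ncid)")
        public static native def close(ncid:Int):Int;
    }

Each such stub must be written and maintained by hand, which is the translation overhead the citing authors point to.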