This paper describes the design and implementation of a scalable runtime system and an optimizing compiler for Unified Parallel C (UPC). An experimental evaluation on BlueGene/L, a distributed-memory machine, demonstrates that the combination of the compiler with the runtime system produces programs with performance comparable to that of efficient MPI programs and with good performance scalability up to hundreds of thousands of processors. Our runtime system design solves the problem of maintaining shared-object consistency efficiently on a distributed-memory machine. Our compiler infrastructure simplifies the code generated for parallel loops in UPC by eliminating affinity tests, removes several levels of indirection for accesses to segments of shared arrays that the compiler can prove to be local, and implements remote update operations through a lower-cost asynchronous message. The performance evaluation uses three well-known benchmarks (HPC RandomAccess, HPC STREAM, and NAS CG) to obtain scaling and absolute performance numbers on up to 131,072 processors, the full BlueGene/L machine. These results were used to win the HPC Challenge Competition at SC05 in Seattle, WA, demonstrating that PGAS languages support both productivity and performance.
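A minimal UPC sketch of the kind of parallel loop these optimizations target (the array name, its size, and the scaling operation are illustrative assumptions, not taken from the paper): the affinity expression restricts each iteration to the thread that owns the element, and the affinity-test elimination described above lets the compiler lower the loop to a strided walk over purely local data.

#include <upc.h>

#define ELEMS_PER_THREAD 1000000                 /* assumed problem size */
shared double a[ELEMS_PER_THREAD * THREADS];     /* default cyclic layout */

void scale(double s)
{
    int i;
    /* The affinity expression &a[i] makes each iteration run on the thread
     * that owns a[i]. A naive translation tests affinity on every iteration;
     * after affinity-test elimination the compiler can instead generate a
     * local loop of the form
     *     for (i = MYTHREAD; i < ELEMS_PER_THREAD * THREADS; i += THREADS)
     * that touches only the thread's own elements, with no shared-pointer
     * indirection per access. */
    upc_forall (i = 0; i < ELEMS_PER_THREAD * THREADS; i++; &a[i]) {
        a[i] *= s;
    }
}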
Data centers (DCs) are currently the largest closed-loop systems in the information technology (IT) and networking worlds, continuously growing toward multi-million-node clouds [1]. DC operators manage and control converged IT and network infrastructures in order to offer a broad range of services and applications to their customers. Typical services and applications provided by current DCs range from traditional IT resource outsourcing (storage, remote desktop, disaster recovery, etc.) to a plethora of web applications (e.g., browsers, social networks, online gaming). Innovative applications and services are also gaining momentum, to the point that they will become the main representatives of future DC workloads. Among them are high-performance computing (HPC) and big data applications [2]. HPC encompasses a broad set of computationally intensive scientific applications aiming to solve highly complex problems in areas such as quantum mechanics, molecular modeling, and oil and gas exploration. Big data applications target the analysis of massive amounts of data collected from people on the Internet in order to analyze and predict their behavior.

All these applications and services require huge data exchanges between servers inside the DC, supported by the DC network (DCN): the intra-DC communication network. The DCN must provide ultra-large capacity to ensure high throughput between servers. Moreover, very low latencies are mandatory, particularly in HPC, where parallel computing tasks running concurrently on multiple servers are tightly interrelated. Unfortunately, current multi-tier hierarchical tree-based DCN architectures relying on Ethernet or InfiniBand electronic switches suffer from bandwidth bottlenecks, high latencies, manual operation, and poor scalability to meet the expected DC growth forecasts [3]. These limitations have mandated a renewed investigation ...

Abstract: Applications running inside data centers are enabled through the cooperation of thousands of servers arranged in racks and interconnected through the data center network. Current DCN architectures based on electronic devices are neither scalable enough to face the massive growth of DCs, nor flexible enough to efficiently and cost-effectively support highly dynamic application traffic profiles. The FP7 European project LIGHTNESS foresees extending the capabilities of today's electrical DCNs through the introduction of optical packet switching (OPS) and optical circuit switching (OCS) paradigms, together realizing an advanced and highly scalable DCN architecture for ultra-high-bandwidth and low-latency server-to-server interconnection. This article reviews the current DC and high-performance computing (HPC) outlooks, followed by an analysis of the main requirements for future DCs and HPC platforms. As the key contribution of the article, the LIGHTNESS DCN solution is presented, deeply elaborating on the envisioned DCN data plane technologies, as well as on the unified SDN-enabled control plane architectural solution that will empower OPS and OCS transm...
Programming for large-scale, multicore-based architectures requires adequate tools that offer ease of programming and do not hinder application performance. StarSs is a family of parallel programming models based on automatic function-level parallelism that targets productivity. StarSs deploys a data-flow model: it analyzes dependencies between tasks and manages their execution, exploiting their concurrency as much as possible. This paper introduces Cluster Superscalar (ClusterSs), a new StarSs member designed to execute on clusters of SMPs (symmetric multiprocessors). ClusterSs tasks are asynchronously created and assigned to the available resources with the support of the IBM APGAS runtime, which provides an efficient and portable communication layer based on one-sided communication. We present the design of ClusterSs on top of APGAS, as well as the programming model and execution runtime for Java applications. Finally, we evaluate the productivity of ClusterSs, both in terms of programmability and performance, and compare it to that of the IBM X10 language.