Abstract. A large number of MPI implementations are currently available, each of which emphasizes different aspects of high-performance computing or is intended to solve a specific research problem. The result is a myriad of incompatible MPI implementations, all of which require separate installation, and the combination of which presents significant logistical challenges for end users. Building upon prior research, and influenced by experience gained from the code bases of the LAM/MPI, LA-MPI, and FT-MPI projects, Open MPI is an all-new, production-quality MPI-2 implementation that is fundamentally centered around component concepts. Open MPI provides a unique combination of novel features previously unavailable in an open-source, production-quality implementation of MPI. Its component architecture provides both a stable platform for third-party research and a mechanism for the run-time composition of independent software add-ons. This paper presents a high-level overview of the goals, design, and implementation of Open MPI.
Current and future surveys of large-scale cosmic structure generate a massive and complex data stream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multicore node clusters, and Blue Gene systems. HACC's design allows for ease of portability and, at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high-resolution cosmological simulations performed so far, including benchmarks evolving more than 3.6 trillion particles.
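The first step of any particle-mesh (PM) force calculation of the kind HACC's long-range solver performs is depositing particle mass onto a grid. The sketch below is a minimal one-dimensional cloud-in-cell (CIC) deposit for illustration only; it is not HACC code, and the grid size, box length, and particle count are arbitrary choices.

```python
import numpy as np

def cic_deposit(positions, ngrid, box):
    """Cloud-in-cell mass assignment onto a 1D periodic mesh.
    Each unit-mass particle is split between its two nearest grid
    points, weighted linearly by distance -- the standard first step
    of a particle-mesh force computation (illustrative sketch only)."""
    rho = np.zeros(ngrid)
    x = (positions / box) * ngrid            # positions in grid units
    i = np.floor(x).astype(int)
    frac = x - i                             # offset within a cell, in [0, 1)
    np.add.at(rho, i % ngrid, 1.0 - frac)    # share to left grid point
    np.add.at(rho, (i + 1) % ngrid, frac)    # share to right grid point
    return rho

# Mass-conservation check: total deposited mass equals particle count.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=10_000)
rho = cic_deposit(pos, ngrid=64, box=100.0)
print(abs(rho.sum() - 10_000) < 1e-6)  # True
```

In a full PM solver this density field would be Fourier-transformed, multiplied by the Green's function of the Poisson equation, and differenced to obtain forces; the deposit step above is where the grid resolution first enters.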
We present a set of ultra-large particle-mesh simulations of the Lyman-α forest targeted at understanding the imprint of baryon acoustic oscillations (BAO) in the intergalactic medium. We use 9 dark-matter-only simulations which can, for the first time, simultaneously resolve the Jeans scale of the intergalactic gas while covering the large volumes required to adequately sample the acoustic feature. Mock absorption spectra are generated using the fluctuating Gunn-Peterson approximation and have approximately correct flux probability density functions (PDFs) and small-scale power spectra. On larger scales there is clear evidence in the redshift-space correlation function for an acoustic feature, which matches a linear theory template with constant bias. These spectra, which we make publicly available, can be used to test pipelines, plan future experiments, and model various physical effects. As an illustration we discuss the basic properties of the acoustic signal in the forest, the scaling of errors with noise and source number density, modified statistics to treat mean flux evolution and mis-estimation, and non-gravitational sources such as fluctuations in the photo-ionizing background and temperature fluctuations due to HeII reionization. Subject headings: methods: N-body simulations - cosmology: large-scale structure of universe
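The fluctuating Gunn-Peterson approximation (FGPA) maps a dark-matter overdensity field directly to transmitted flux via a power-law optical depth, τ = A(1 + δ)^β with F = e^(−τ). The sketch below illustrates this mapping on a toy skewer; the amplitude A and slope β are illustrative assumptions (in practice A is tuned to reproduce the observed mean flux, and β ≈ 2 − 0.7(γ − 1) for a temperature-density relation T = T₀(1 + δ)^(γ−1)), and the Gaussian overdensity field stands in for a real simulated line of sight.

```python
import numpy as np

def fgpa_flux(delta, A=0.3, beta=1.6):
    """Transmitted flux from overdensity via the fluctuating
    Gunn-Peterson approximation: tau = A * (1 + delta)**beta,
    F = exp(-tau). A and beta are illustrative values, not
    calibrated to any particular simulation."""
    tau = A * (1.0 + np.asarray(delta)) ** beta
    return np.exp(-tau)

# Toy skewer: Gaussian overdensity fluctuations along the line of sight,
# clipped so the density stays positive (a stand-in for simulation output).
rng = np.random.default_rng(0)
delta = rng.normal(0.0, 0.5, size=1024).clip(-0.99, None)
flux = fgpa_flux(delta)
print(0.0 < flux.mean() < 1.0)  # True: mean transmitted flux lies in (0, 1)
```

Because the mapping is monotonically decreasing in δ, dense regions absorb more, which is why the flux PDF and small-scale power of such mocks track the underlying matter field.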