Over the past four years, the Big Data and Exascale Computing (BDEC) project organized a series of five international workshops that aimed to explore the ways in which the new forms of data-centric discovery introduced by the ongoing revolution in high-end data analysis (HDA) might be integrated with the established, simulation-centric paradigm of the high-performance computing (HPC) community. Based on those meetings, we argue that the rapid proliferation of digital data generators, the unprecedented growth in the volume and diversity of the data they generate, and the intense evolution of the methods for analyzing and using that data are radically reshaping the landscape of scientific computing. The most critical problems involve the logistics of wide-area, multistage workflows that will move back and forth across the computing continuum, between the multitude of distributed sensors, instruments, and other devices at the network's edge and the centralized resources of commercial clouds and HPC centers. We suggest that the prospects for the future integration of technological infrastructures and research ecosystems need to be considered at three different levels. First, we discuss the convergence of research applications and workflows that establishes a research paradigm combining HPC and HDA, where ongoing progress is already motivating efforts at the other two levels. Second, we offer an account of some of the problems involved in creating a converged infrastructure for peripheral environments, that is, a shared infrastructure that can be deployed throughout the network in a scalable manner to meet the highly diverse requirements for processing, communication, and buffering/storage of the massive data workflows of many different scientific domains. Third, we focus on some opportunities for software ecosystem convergence in big, logically centralized facilities that execute large-scale simulations and models and/or perform large-scale data analytics. We close by offering some conclusions and recommendations for future investment and policy review.
The simulation of sedimentary basins aims at reconstructing their historical evolution in order to provide quantitative predictions about the phenomena leading to hydrocarbon accumulations. The kernel of this simulation is the numerical solution of a complex system of non-linear partial differential equations (PDEs) of mixed parabolic-hyperbolic type in 3D. Discretisation and linearisation of this system lead to very large, ill-conditioned, non-symmetric systems of linear equations with three unknowns per mesh cell, i.e. pressure, geostatic load, and oil saturation. This article describes the parallel version of a preconditioner for these systems, presented in its sequential form in [7]. It consists of three steps: in the first step, a local decoupling of the pressure and saturation unknowns aims at concentrating the elliptic part of the system in the "pressure block", which is then, in the second step, preconditioned by AMG. The third step finally consists in recoupling the equations. Each step is efficiently parallelised using a partitioning of the domain into vertical layers along the y-axis and a distributed-memory model within the PETSc library (Argonne National Laboratory, IL). The main new ingredient in the parallel version is a parallel AMG preconditioner for the pressure block, for which we use the BoomerAMG implementation in the hypre library [4]. Numerical results on real case studies exhibit (i) a significant reduction of CPU times, up to a factor of 5 with respect to a block Jacobi preconditioner with an ILU(0) factorisation of each block, (ii) robustness with respect to heterogeneities, anisotropies and high migration ratios, and (iii) a speedup of up to 4 on 8 processors.
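To make the second step concrete, the following is a minimal sketch (not the authors' code) of how one might hand a pressure-block system to hypre's BoomerAMG through PETSc, with GMRES as the outer Krylov method. The toy 2-D Laplacian assembled here is only a stand-in for the decoupled pressure block, and the decoupling/recoupling steps of the preconditioner are not shown.

```c
/* Minimal sketch, not the authors' code: GMRES preconditioned by hypre's
 * BoomerAMG through PETSc, standing in for the AMG treatment of the
 * decoupled "pressure block". The 2-D Laplacian below is a toy placeholder
 * for that block; a PETSc build configured --with-hypre is required.
 * Error checking is omitted for brevity. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PC       pc;
  PetscInt n = 64, Istart, Iend, Ii, i, j;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Toy n x n 2-D Laplacian (5-point stencil), distributed row-wise. */
  MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE,
               n*n, n*n, 5, NULL, 5, NULL, &A);
  MatGetOwnershipRange(A, &Istart, &Iend);
  for (Ii = Istart; Ii < Iend; Ii++) {
    i = Ii / n; j = Ii % n;
    MatSetValue(A, Ii, Ii, 4.0, INSERT_VALUES);
    if (i > 0)     MatSetValue(A, Ii, Ii - n, -1.0, INSERT_VALUES);
    if (i < n - 1) MatSetValue(A, Ii, Ii + n, -1.0, INSERT_VALUES);
    if (j > 0)     MatSetValue(A, Ii, Ii - 1, -1.0, INSERT_VALUES);
    if (j < n - 1) MatSetValue(A, Ii, Ii + 1, -1.0, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  MatCreateVecs(A, &x, &b);
  VecSet(b, 1.0);

  /* Outer Krylov solver: GMRES; preconditioner: hypre/BoomerAMG. */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPGMRES);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCHYPRE);
  PCHYPRESetType(pc, "boomeramg");
  KSPSetFromOptions(ksp);  /* allow -ksp_monitor, -pc_hypre_boomeramg_* overrides */
  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp); VecDestroy(&x); VecDestroy(&b); MatDestroy(&A);
  PetscFinalize();
  return 0;
}
```

With a hypre-enabled PETSc build, the same choice can also be made at run time via the options database, e.g. -pc_type hypre -pc_hypre_type boomeramg, which is how such solver settings are typically toggled in practice.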
We have performed the first-ever numerical N-body simulation of the full observable universe (DEUS "Dark Energy Universe Simulation" FUR "Full Universe Run"). It evolved 550 billion particles on an Adaptive Mesh Refinement grid with more than two trillion computing points along the entire evolutionary history of the universe and across six orders of magnitude in length scale, from the size of the Milky Way to that of the whole observable universe. To date, this is the largest and most advanced cosmological simulation ever run. It provides unique information on the formation and evolution of the largest structures in the universe and exceptional support for future observational programs dedicated to mapping the distribution of matter and galaxies in the universe. The simulation ran on 4,752 (of 5,040) thin nodes of the Bull supercomputer CURIE, using more than 300 TB of memory for 10 million hours of computing time. About 50 PB of data were generated throughout the run. Using an advanced and innovative reduction workflow, the amount of useful stored data was reduced to 500 TB.
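A rough back-of-the-envelope check (my own arithmetic, using only the figures quoted above and decimal prefixes) puts these numbers in perspective:

$$
\frac{300\,\text{TB}}{5.5\times 10^{11}\ \text{particles}} \approx 5.5\times 10^{2}\ \text{bytes per particle},
\qquad
\frac{50\,\text{PB}}{500\,\text{TB}} = 100 .
$$

That is, on the order of a few hundred bytes of in-memory state per particle during the run, and roughly a hundredfold reduction of the raw output into the 500 TB that were finally archived.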