Abstract. Accurately modelling the contribution of Greenland and Antarctica to sea level rise requires solving partial differential equations at a high spatial resolution. In this paper, we discuss the scaling of the Ice-sheet and Sea-level System Model (ISSM) applied to the Greenland Ice Sheet with horizontal grid resolutions varying between 10 and 0.25 km. The model setup used as a benchmark problem comprises a variety of modules with different levels of complexity and computational demands. Its core is the so-called stress balance module, which uses the higher-order (Blatter–Pattyn) approximation of the Stokes equations and a mesh of linear prismatic finite elements to compute the ice flow, including free-surface and ice-front evolution as well as thermodynamics in the form of an enthalpy balance. We develop a detailed user-oriented, yet low-overhead, performance instrumentation tailored to the requirements of Earth system models and run scaling tests with up to 6144 Message Passing Interface (MPI) processes. The results show that the computation of the Greenland model overall scales well up to 3072 MPI processes but is eventually slowed down by matrix assembly, output handling and lower-dimensional problems with fewer unknowns per MPI process. We also discuss improvements of the scaling and identify further improvements needed for climate research. The instrumented version of ISSM thus not only identifies potential performance bottlenecks that were not present at lower core counts but also provides the capability to continually monitor the performance of the ISSM code base. This is of long-term significance, as the overall performance of the ISSM model depends on the subtle interplay between algorithms, their implementation, underlying libraries, compilers, runtime systems and hardware characteristics, all of which are in a constant state of flux. We believe that future large-scale high-performance computing (HPC) systems will continue to employ the MPI-based programming paradigm on the road to exascale. Our scaling study pertains to a particular modelling setup available within ISSM and does not address accelerator techniques such as the use of vector units or GPUs. However, with 6144 MPI processes, we identified issues that need to be addressed in order to improve the ability of the ISSM code base to take advantage of upcoming systems that will require scaling to even higher numbers of MPI processes.
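As an illustration of the low-overhead, user-oriented performance instrumentation described above, the following C++ sketch shows one common way to time named code regions in an MPI application and reduce the per-rank timings to the maximum over all ranks, which is the quantity that typically limits scaling. The class and function names (ScopedTimer, ReportTimings) are illustrative only and are not the actual ISSM instrumentation API.

#include <mpi.h>
#include <cstdio>
#include <map>
#include <string>

// Accumulated wall-clock time per named region on this MPI rank.
static std::map<std::string, double> g_regionTime;

// Times a code region between construction and destruction using MPI_Wtime.
class ScopedTimer {
public:
    explicit ScopedTimer(const std::string& name)
        : name_(name), start_(MPI_Wtime()) {}
    ~ScopedTimer() { g_regionTime[name_] += MPI_Wtime() - start_; }
private:
    std::string name_;
    double start_;
};

// Reduces each region's time to the maximum over all ranks and prints it on
// rank 0; assumes all ranks executed the same set of regions.
void ReportTimings(MPI_Comm comm) {
    int rank;
    MPI_Comm_rank(comm, &rank);
    for (const auto& entry : g_regionTime) {
        double local = entry.second;
        double maxTime = 0.0;
        MPI_Reduce(&local, &maxTime, 1, MPI_DOUBLE, MPI_MAX, 0, comm);
        if (rank == 0)
            std::printf("%-20s max %.3f s\n", entry.first.c_str(), maxTime);
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    {
        ScopedTimer t("StressBalance");  // e.g. wrap the stress balance solve
        // ... assemble and solve ...
    }
    ReportTimings(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}

The overhead of such scoped timers is a single clock read per region entry and exit, so they can remain enabled in production runs; the global reduction is deferred to the end of the simulation.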
Abstract. The subglacial hydrological system affects the motion of ice sheets, the ocean circulation through freshwater discharge, as well as marginal lakes and rivers. To model this system, a porous-medium model representing a confined–unconfined aquifer system (CUAS) with evolving transmissivity has been developed. To allow for realistic simulations, we developed CUAS-MPI, an MPI-parallel C/C++ implementation that employs the PETSc infrastructure for handling grids and equation systems. We describe the CUAS model and our software design and validate the numerical results of a pumping test against analytical solutions. We then investigate the scaling behaviour of CUAS-MPI and show that it scales up to 3840 MPI processes running a realistic Greenland setup. Our measurements show that CUAS-MPI reaches a throughput comparable to that of ice sheet simulations, e.g. with the Ice-sheet and Sea-level System Model (ISSM). Lastly, we discuss opportunities for ice sheet modelling and future coupling of CUAS-MPI with other simulations, and we consider throughput bottlenecks and limits of further scaling.
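The combination of a distributed structured grid and a parallel linear solver mentioned above follows a standard PETSc pattern. The sketch below sets up a small 2-D DMDA grid, a matrix and vectors derived from it, and a KSP solver; the grid size and the placeholder operator are illustrative and do not reproduce the CUAS-MPI equations.

#include <petscdmda.h>
#include <petscksp.h>

int main(int argc, char** argv) {
    PetscInitialize(&argc, &argv, NULL, NULL);

    // Regular 2-D grid, distributed over all MPI ranks by PETSc.
    DM da;
    DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                 DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
                 1, 1, NULL, NULL, &da);
    DMSetUp(da);

    // Matrix and vectors whose layout and sparsity follow the grid stencil.
    Mat A;
    Vec head, rhs;
    DMCreateMatrix(da, &A);
    DMCreateGlobalVector(da, &head);
    VecDuplicate(head, &rhs);

    // Placeholder system (identity operator, unit right-hand side); the real
    // model assembles the implicit head equation here.
    MatShift(A, 1.0);
    VecSet(rhs, 1.0);

    // Krylov solver; method and preconditioner can be chosen at run time
    // via PETSc command-line options.
    KSP ksp;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, rhs, head);

    KSPDestroy(&ksp);
    MatDestroy(&A);
    VecDestroy(&head);
    VecDestroy(&rhs);
    DMDestroy(&da);
    PetscFinalize();
    return 0;
}

Because solver and preconditioner are selected at run time, the same executable can be tuned per machine and process count, which is convenient for scaling experiments.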
Simulating the hydrological systems underneath ice sheets and glaciers is important for estimating the freshwater flux into the ocean as well as for inferring the characteristics of the hydrological system and its impact on ice sheet dynamics. In particular, simulations of the subglacial hydrological system at high temporal and spatial resolution, coupled to ice sheet models, are needed to investigate the formation of ice streams. To run such simulations efficiently, both codes need to be parallelised. To this end, we present our approach for a parallelised version of the confined–unconfined aquifer system (CUAS) model (Beyer et al., 2018), which was originally implemented in Python. CUAS simulates an effective porous-medium layer in which the transmissivity indicates whether the flow is channelised. The transmissivity evolves through melt, creep and cavity opening. A fully implicit finite difference scheme is used for the hydraulic head, while an explicit Euler time step is used for the transmissivity.

The new CUAS-MPI version is written in C++ and instrumented for performance measurements. The parallelisation is done with MPI, taking advantage of PETSc data structures and linear equation system solvers. The code has been designed to be coupled to the Ice-sheet and Sea-level System Model (ISSM) using preCICE (precice.org).

Pumping tests, which are widely used in applied groundwater hydrology, are performed to test the model implementation, including the boundary conditions, and to compare against analytical solutions. We further present test applications to the Greenland Ice Sheet, with the major focus on performance rather than on the characteristics of the hydrological system.
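The time-stepping structure described above (an implicit solve for the hydraulic head followed by an explicit Euler update of the transmissivity) can be sketched as follows. The physics terms and time step are placeholders and the function names are not the CUAS-MPI API; the implicit solve, which in CUAS-MPI is a distributed PETSc linear system, is reduced to a stub.

#include <cstdio>
#include <vector>

// Stub for the fully implicit finite-difference solve of the head equation;
// in the real code this assembles and solves a distributed linear system.
void solveHeadImplicit(std::vector<double>& head,
                       const std::vector<double>& transmissivity, double dt) {
    (void)head; (void)transmissivity; (void)dt;
}

// Explicit Euler step for the transmissivity, dT/dt = melt + cavity - creep,
// with placeholder rates instead of the actual parameterisations.
void updateTransmissivityExplicit(std::vector<double>& transmissivity, double dt) {
    const double melt = 1e-6, cavity = 1e-7, creep = 5e-7;  // illustrative rates
    for (double& T : transmissivity)
        T += dt * (melt + cavity - creep);
}

int main() {
    const std::size_t n = 100;          // illustrative 1-D grid
    const double dt = 3600.0;           // one-hour time step (illustrative)
    std::vector<double> head(n, 0.0);
    std::vector<double> transmissivity(n, 1e-4);

    for (int step = 0; step < 24; ++step) {               // one day of simulation
        solveHeadImplicit(head, transmissivity, dt);      // implicit head update
        updateTransmissivityExplicit(transmissivity, dt); // explicit T update
    }
    std::printf("final transmissivity[0] = %g m^2/s\n", transmissivity[0]);
    return 0;
}

Splitting the step this way keeps the linear system for the head as the only globally coupled part of each time step, while the explicit transmissivity update requires no global solve.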