Abstract. We consider optimal-scaling multigrid solvers for the linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieving parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is not straightforward. In this paper, we present a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction (MGR). In numerical experiments, we demonstrate the optimality of our multigrid-reduction-in-time (MGRIT) algorithm for solving diffusion equations in two and three space dimensions. Furthermore, through both parallel performance models and actual parallel numerical results, we show that we can achieve significant speedup in comparison to sequential time marching on modern architectures.

Key words. parabolic problems, reduction-based multigrid, multigrid-in-time, parareal

AMS subject classifications. 65F10, 65M22, 65M55

1. Introduction. One of the major challenges facing the computational science community on future architectures is that faster compute speeds must come from increased concurrency, since clock speeds are no longer increasing but core counts are rising sharply. As a consequence, traditional time marching is becoming a serious sequential bottleneck in time-dependent simulations: improving simulation accuracy by refining the spatial resolution requires a similar (or greater) refinement of the temporal resolution, which is also needed to maintain stability in explicit methods. Numerical time integration therefore involves many more time steps, leading to long overall compute times, since parallelizing only in space limits concurrency. Solving for multiple time steps in parallel, and thereby increasing concurrency, would remove this time-integration bottleneck.

Because time is sequential in nature, the idea of simultaneously solving for multiple time steps is not intuitive. Yet it is possible, with work on this topic going back as early as 1964 [33], although most research on the subject has been done within the past 30 years, including [2, 7-10, 14-22, 25, 28, 31, 32, 38, 40-44]. One approach to achieving parallelism in time is with multigrid methods. The parareal-in-time method, introduced by Lions, Maday, and Turinici in [25], can be interpreted as a two-level multigrid method [16], even though the leading idea came from a spatial domain decomposition approach. The algorithm is optimal, but its concurrency is limited because the coarse-grid solve is still sequential. Among true multilevel (not two-level) schemes, only a few methods, such as [21, 42, 43], exhibit full multigrid optimality and concurrency, and most are designed for specific problems or discretizations.
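To make the sequential bottleneck and the time-parallel idea concrete, the following is a minimal sketch, not the MGRIT algorithm or any implementation from this paper: it contrasts classical sequential time marching (backward Euler for a 1-D diffusion model problem) with a parareal-style two-level iteration, in which the fine propagations over each time chunk are independent of one another and could be executed in parallel. All names and parameters (backward_euler_step, time_march, parareal, nx, dt, steps_per_chunk, n_chunks, and so on) are illustrative assumptions, not the paper's notation.

```python
# Sketch only: sequential time marching vs. a parareal-style two-level iteration
# for the 1-D diffusion equation u_t = nu * u_xx with homogeneous Dirichlet BCs.
import numpy as np

def backward_euler_step(u, dt, dx, nu):
    """One implicit (backward Euler) step, solved with a dense solve for brevity."""
    n = u.size
    r = nu * dt / dx**2
    A = (1 + 2 * r) * np.eye(n) - r * (np.eye(n, k=1) + np.eye(n, k=-1))
    return np.linalg.solve(A, u)

def time_march(u0, nsteps, dt, dx, nu):
    """Classical sequential time marching: each step depends on the previous one."""
    u = u0.copy()
    for _ in range(nsteps):
        u = backward_euler_step(u, dt, dx, nu)
    return u

def parareal(u0, n_chunks, steps_per_chunk, dt, dx, nu, n_iter=3):
    """Parareal-style two-level iteration.

    G: coarse propagator = one large backward Euler step over a whole chunk (sequential).
    F: fine propagator  = steps_per_chunk small steps within a chunk; the F applications
       in each iteration are independent across chunks and are the source of parallelism.
    """
    DT = steps_per_chunk * dt                      # coarse step = chunk length
    G = lambda u: backward_euler_step(u, DT, dx, nu)
    F = lambda u: time_march(u, steps_per_chunk, dt, dx, nu)

    # Initial guess at chunk interfaces from the cheap sequential coarse sweep.
    U = [u0.copy()]
    for k in range(n_chunks):
        U.append(G(U[k]))

    for _ in range(n_iter):
        Fu = [F(U[k]) for k in range(n_chunks)]    # embarrassingly parallel across chunks
        Gu = [G(U[k]) for k in range(n_chunks)]
        U_new = [u0.copy()]
        for k in range(n_chunks):                  # sequential coarse correction sweep
            U_new.append(G(U_new[k]) + Fu[k] - Gu[k])
        U = U_new
    return U[-1]

if __name__ == "__main__":
    nx, nu = 63, 1.0
    dx = 1.0 / (nx + 1)
    x = np.linspace(dx, 1.0 - dx, nx)
    u0 = np.sin(np.pi * x)

    dt, steps_per_chunk, n_chunks = 1e-4, 25, 16
    u_seq = time_march(u0, n_chunks * steps_per_chunk, dt, dx, nu)
    u_par = parareal(u0, n_chunks, steps_per_chunk, dt, dx, nu, n_iter=4)
    print("max difference vs. sequential marching:", np.max(np.abs(u_seq - u_par)))
```

Note that the coarse sweep G in this sketch is still applied sequentially across chunks, which is exactly the concurrency limitation of two-level parareal discussed above; the MGRIT approach of this paper instead applies the reduction idea recursively over multiple time levels.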