We consider the iterative solution of large sparse symmetric positive definite linear systems. We present an algebraic multigrid method that has a guaranteed convergence rate for the class of nonsingular symmetric M-matrices with nonnegative row sum. The coarsening is based on the aggregation of the unknowns. A key ingredient is an algorithm that builds the aggregates while ensuring that the corresponding two-grid convergence rate is bounded by a user-defined parameter. For a sensible choice of this parameter, it is shown that the recursive use of the two-grid procedure yields convergence independent of the number of levels, provided that one uses a proper AMLI cycle. On the other hand, the computational cost per iteration step is of optimal order if the mean aggregate size is large enough. This cannot be guaranteed in all cases but is analytically shown to hold for the model Poisson problem. For more general problems, a wide range of experiments suggests that there are no complexity issues and further demonstrates the robustness of the method. The experiments are performed on systems obtained from low-order finite difference or finite element discretizations of second-order elliptic partial differential equations (PDEs). The set includes two- and three-dimensional problems, with both structured and unstructured grids, some of them with local refinement and/or reentrant corners, and possibly jumps or anisotropies in the PDE coefficients.
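As a rough illustration of the ingredients named above (piecewise-constant prolongation over aggregates, a Galerkin coarse matrix, and a simple smoother), the following Python sketch applies one two-grid cycle. It assumes the aggregates are already given and uses a damped Jacobi smoother; it is not the paper's aggregation algorithm or its AMLI cycle, and the helper names are ours.

    # Minimal two-grid cycle for A x = b with aggregation-based coarsening.
    # A: scipy sparse SPD matrix, b/x: numpy vectors,
    # agg[i]: index of the aggregate containing unknown i (assumed given).
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def piecewise_constant_P(agg, n_coarse):
        """Prolongation with one column per aggregate: P[i, agg[i]] = 1."""
        n = len(agg)
        return sp.csr_matrix((np.ones(n), (np.arange(n), agg)),
                             shape=(n, n_coarse))

    def two_grid_cycle(A, b, x, agg, n_coarse, nu=1, omega=0.7):
        P = piecewise_constant_P(agg, n_coarse)
        Ac = (P.T @ A @ P).tocsc()            # Galerkin coarse-grid matrix
        D = A.diagonal()
        for _ in range(nu):                   # pre-smoothing (damped Jacobi)
            x = x + omega * (b - A @ x) / D
        r_c = P.T @ (b - A @ x)               # restrict the residual
        x = x + P @ spla.spsolve(Ac, r_c)     # coarse-grid correction
        for _ in range(nu):                   # post-smoothing
            x = x + omega * (b - A @ x) / D
        return x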
We present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by matrices of low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, e.g., finite element methods, boundary element methods, etc. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of the structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. This work is part of a more global effort, the STRUMPACK (STRUctured Matrices PACKage) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
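The randomized-sampling kernel at the heart of HSS compression can be sketched in a few lines of Python. The adaptive loop below is a generic range finder in the spirit of Halko, Martinsson, and Tropp, not STRUMPACK's own adaptive mechanism; the function name adaptive_range and the stopping rule are our assumptions.

    # Generic adaptive randomized range finder: grow an orthonormal basis Q
    # until the sampled residual B - Q Q^T B is small. This is the kind of
    # kernel used, block by block, when compressing a matrix to HSS form.
    import numpy as np

    def adaptive_range(B, tol=1e-8, block=8, max_rank=None):
        """Return Q with orthonormal columns so that ||B - Q Q^T B|| <= ~tol."""
        m, n = B.shape
        max_rank = max_rank or min(m, n)
        Q = np.zeros((m, 0))
        while Q.shape[1] < max_rank:
            Omega = np.random.randn(n, block)   # random test vectors
            Y = B @ Omega                       # sample the range of B
            Y = Y - Q @ (Q.T @ Y)               # project out what Q captures
            if np.linalg.norm(Y, 2) <= tol:     # simple absolute tolerance
                break
            Qi, _ = np.linalg.qr(Y)
            Q = np.hstack([Q, Qi])
            Q, _ = np.linalg.qr(Q)              # re-orthonormalize
        return Q

    # Usage sketch: Q = adaptive_range(A12); A12_lowrank = Q @ (Q.T @ A12)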
We present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups of up to sevenfold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared-memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK (STRUctured Matrices PACKage), which also has a distributed-memory component for dense rank-structured matrices.
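To see why low-rank structure in frontal matrices reduces the cost of elimination, consider the Schur-complement update of a front. The Python sketch below compares the dense update with one applied through skinny low-rank factors; it is only an illustration of the principle under assumed names (schur_update_dense, schur_update_lowrank), since the actual solver compresses whole fronts into HSS form and factors them with a ULV algorithm.

    # A symmetric frontal matrix F = [[F11, F12], [F21, F22]] with F21 = F12^T.
    # Eliminating the F11 block requires the Schur complement F22 - F21 F11^{-1} F12.
    import numpy as np

    def schur_update_dense(F11, F12, F21, F22):
        # Full dense update: cost grows with the full block sizes.
        return F22 - F21 @ np.linalg.solve(F11, F12)

    def schur_update_lowrank(F11, U, V, F22):
        # Assume F12 ~= U V^T (so F21 ~= V U^T) with rank r much smaller than
        # the block sizes; the update then touches only skinny factors.
        W = np.linalg.solve(F11, U)             # k x r
        return F22 - V @ (U.T @ W) @ V.T        # rank-r correction of F22

    # Usage sketch: compress F12 by truncated SVD before the update.
    # u, s, vt = np.linalg.svd(F12, full_matrices=False)
    # r = int((s > 1e-8 * s[0]).sum())          # numerical rank at tolerance
    # U, V = u[:, :r] * s[:r], vt[:r, :].T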
A convergence analysis of two-grid methods based on coarsening by (unsmoothed) aggregation is presented. For diagonally dominant symmetric (M-)matrices, it is shown that the analysis can be conducted locally; that is, the convergence factor can be bounded above by computing separately, for each aggregate, a parameter which in some sense measures its quality. The procedure is purely algebraic and can be used to control a posteriori the quality of automatic coarsening algorithms. Assuming the aggregation pattern is sufficiently regular, it is further shown that the resulting bound is asymptotically sharp for a large class of elliptic boundary value problems, including problems with variable and discontinuous coefficients. In particular, the analysis of typical examples shows that the convergence rate is insensitive to discontinuities under some reasonable assumptions on the aggregation scheme.

Aggregation schemes are not new and trace back to [4,5]. They did not receive much attention until recently because of the difficulty of obtaining grid-independent convergence with them [6, pp. 522-524]; see also [7, p. 663], where an accurate three-grid analysis is presented for the model Poisson problem. This may be related to the fact that the piecewise constant prolongation does not correspond to an interpolation that is at least first-order accurate, as required by standard multigrid theory [2, Sections 3.5 and 6.3.2].

That is why aggregation is often associated with smoothed aggregation, a procedure in which a tentative piecewise constant prolongation operator is smoothed [8,9]. This allows one to develop an appropriate convergence theory but, at the same time, loses some of the attractive features of pure (unsmoothed) aggregation. In particular, assuming the same aggregation pattern, the coarse-grid matrices are less sparse and more costly to compute when using smoothed aggregation.

In this paper, we investigate such pure aggregation schemes based on the piecewise constant prolongation. They may indeed lead to two-grid methods with grid-independent convergence properties, as recently shown in [10] for model constant-coefficient discrete partial differential equation (PDE) problems. There is no contradiction with the results quoted above, whose focus is on the convergence of two-grid methods used recursively in the so-called V-cycle scheme [1]. Indeed, aggregation-based multigrid methods tend to scale poorly with the number of levels when using simple V- or even W-cycles, even though the two-grid scheme converges nicely [10,11]. However, this may be cured using more sophisticated K-cycles, in which Krylov subspace acceleration is used at each level [12]. It is also possible to improve the scalability by increasing the number of smoothing steps on coarser levels [13]. Now, the (Fourier) analysis developed in [10] only addresses constant-coefficient problems with artificial (periodic) boundary conditions. Although there is numerical evidence that aggregation-based methods can be robust in th...
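For readers who want a concrete, if brute-force, counterpart to the local bound discussed above, the following Python sketch assembles the piecewise-constant prolongation from a given aggregate map and evaluates the two-grid error propagation matrix numerically. This dense O(n^3) check is only usable on small test problems and is not the per-aggregate quality measure derived in the paper; the helper name and the damped Jacobi smoother are our assumptions.

    # Numerical check of a two-grid convergence factor on a small test matrix.
    import numpy as np

    def two_grid_error_matrix(A, agg, nc, omega=0.7, nu=1):
        n = A.shape[0]
        P = np.zeros((n, nc))
        P[np.arange(n), agg] = 1.0                           # piecewise constant
        Ac = P.T @ A @ P                                      # Galerkin coarse matrix
        S = np.eye(n) - omega * np.diag(1.0 / np.diag(A)) @ A # Jacobi smoother
        C = np.eye(n) - P @ np.linalg.solve(Ac, P.T @ A)      # coarse correction
        return np.linalg.matrix_power(S, nu) @ C @ np.linalg.matrix_power(S, nu)

    # Usage: 1D Poisson with pairwise aggregation; rho should be well below 1.
    # n = 64; A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    # agg = np.arange(n) // 2
    # E = two_grid_error_matrix(A, agg, n // 2)
    # rho = max(abs(np.linalg.eigvals(E)))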
The paper considers the parallel implementation of an algebraic multigrid method. The sequential version is well suited to solving linear systems arising from the discretization of scalar elliptic PDEs. It is scalable in the sense that the time needed to solve a system is (under known conditions) proportional to the number of unknowns. The associated software code is also robust and often significantly faster than other algebraic multigrid solvers. The present work addresses the challenge of porting it to massively parallel computers. To this end, some critical components are redesigned in a relatively simple yet not straightforward way. Thanks to this, excellent weak scalability results are obtained on three petascale machines among the most powerful available today.