2010
DOI: 10.1016/j.parco.2009.12.009
Performance of parallel AMG-preconditioners in CFD-codes for weakly compressible flows

Cited by 23 publications (23 citation statements)
References 18 publications

“…On the other hand, AGMG appears roughly three times faster than hypre. This is in line with the more detailed comparison developed in [24] (see also [8,10]), which further displays the sensitivity of hypre to the many available options and parameters: tuning these at best for the problem at hand allows, in the sequential case, to reduce the penalty to a factor of about two.…”
Section: Problems Specification, Reported Data, and Tested Architectures (supporting)
confidence: 82%

“…In [27], promising numerical results are reported on a moderate-size Intel cluster (with up to 48 nodes). However, the conclusions in [27] are to be toned down: on the one hand, the comparison made in [10] shows that aggregation-based AMG methods are faster than other AMG methods sequentially or on a few processors, but may become slower as the number of processors increases; on the other hand, the results reported in [6] for a related method show that, on massively parallel systems, the scalability may not be fully satisfactory.…”
Section: Introduction (mentioning)
confidence: 99%

“…The significant increase in iterations of SA with increased degree of parallelism is not related to the coarse-grid treatment. It is a consequence of the parallelisation of the aggregation, which is strictly local with respect to the domain assigned to each processor, so that the parallel aggregation generally leads to less favourable schemes, in particular on the coarser grids; since SA uses significantly larger aggregates than PA and KC, this algorithm is more sensitive to this effect; see Emans [24]. The behaviour is qualitatively the same in cases A and B.…”
Section: Table (mentioning)
confidence: 97%

“…In our parallel implementation the aggregation and the smoothing are strictly local to the processes; i.e., aggregates are not intersected by domain boundaries, and smoothing is done only between the interior points of each process. For further details of the parallel aggregation process and a justification for our choice, we refer the reader to Emans [7]. The resulting interpolation of ams1cg is therefore also local to each process, as is that of amggs2.…”
Section: Smoothed Aggregation AMG: ams1cg (mentioning)
confidence: 99%
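
The strictly process-local aggregation described in the last two excerpts (aggregates never intersected by domain boundaries) can be illustrated with a short sketch. This is a minimal, hypothetical reconstruction under stated assumptions, not the implementation from the cited papers: the greedy pairwise strategy, the strength measure |a_ij|, and the name local_pairwise_aggregation are all assumptions made for illustration.

```python
# Minimal sketch (not the papers' actual code) of process-local pairwise
# aggregation: aggregates are formed only among unknowns owned by one MPI
# process, so no aggregate ever crosses a subdomain boundary. The matrix
# layout, strength measure, and function name are assumptions.
import numpy as np
import scipy.sparse as sp

def local_pairwise_aggregation(A_local, n_owned):
    """Greedy pairwise aggregation restricted to locally owned unknowns.

    A_local : CSR matrix holding the locally owned rows; columns with
              index >= n_owned refer to halo (off-process) unknowns and
              are never placed into an aggregate.
    Returns an array mapping each owned unknown to its aggregate id.
    """
    A = A_local.tocsr()
    agg = -np.ones(n_owned, dtype=int)   # -1 marks "not yet aggregated"
    next_id = 0
    for i in range(n_owned):
        if agg[i] != -1:
            continue
        # Find the strongest unaggregated *local* neighbour (largest |a_ij|);
        # halo columns are skipped, which keeps the aggregate on-process.
        start, end = A.indptr[i], A.indptr[i + 1]
        best_j, best_w = -1, 0.0
        for j, a_ij in zip(A.indices[start:end], A.data[start:end]):
            if j != i and j < n_owned and agg[j] == -1 and abs(a_ij) > best_w:
                best_j, best_w = j, abs(a_ij)
        agg[i] = next_id
        if best_j != -1:
            agg[best_j] = next_id        # pair i with its strongest neighbour
        next_id += 1                     # singleton if no local partner exists
    return agg

# Tiny usage example: a 1D Laplacian on 6 locally owned points plus one
# halo column (index 6) that must stay out of every aggregate.
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(6, 7), format="csr")
print(local_pairwise_aggregation(A, n_owned=6))   # e.g. [0 0 1 1 2 2]
```

Because halo columns are excluded, the setup phase needs no inter-process communication; the trade-off, as the excerpts note, is that aggregates near subdomain boundaries tend to be smaller or less regular, which degrades coarse-grid quality as the number of processes grows, and more so for methods with larger aggregates such as SA.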