2001
DOI: 10.1016/s0045-7949(00)00153-x

MPI-based implementation of a PCG solver using an EBE architecture and preconditioner for implicit, 3-D finite element analysis

Cited by 37 publications (26 citation statements)
References 39 publications

“…Once each processor has the updated version of the diagonal terms for all of its nodes, the preconditioning calculation can be performed independently on each domain. For other types of preconditioners, such as element-by-element [23] or incomplete Choleski factorizations [24], the parallel implementation would be more involved; see Reference [25] for an example.…”
Section: Preconditioner Computation (mentioning)
confidence: 99%
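The preconditioning step described in this excerpt is trivially parallel once the diagonal terms have been summed across subdomain boundaries. The sketch below illustrates that local step only; it is not taken from the cited codes, and the array names (`diag`, `r`, `z`) and the assumption that the diagonal is already fully assembled are illustrative.

```c
/* Minimal sketch of the diagonal (Jacobi) preconditioning step described
 * in the excerpt above, assuming each process already holds fully summed
 * diagonal terms for its own degrees of freedom (e.g. after a neighbour
 * exchange). Names and data layout are illustrative, not from the cited codes. */
#include <stddef.h>

void apply_jacobi_preconditioner(size_t n_local,
                                 const double *diag, /* assembled K_ii for local dofs */
                                 const double *r,    /* local residual               */
                                 double *z)          /* preconditioned residual      */
{
    /* No communication is needed here: every entry depends only on local
     * data, which is why the diagonal preconditioner parallelises trivially
     * compared with EBE or incomplete Cholesky variants. */
    for (size_t i = 0; i < n_local; ++i)
        z[i] = r[i] / diag[i];
}
```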
“…It is worth noting here that MPI based parallel finite element approaches [17,18] have also been developed for implicit nonlinear dynamic analysis utilising linear preconditioned conjugate gradient (PCG) solvers for the iterative analysis as opposed to the frontal solver, conventionally considered to be direct solution method [19]. The use of PCG solvers requires significant modification of existing finite element programs as most of these utilise direct solvers based on Gaussian elimination.…”
Section: Introduction (mentioning)
confidence: 99%
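For context, the following is a textbook sketch of the preconditioned conjugate gradient iteration that such MPI-based approaches distribute. The callback names and the serial dot products are illustrative stand-ins; in the cited solvers the matrix-vector product, preconditioner, and reductions are parallelised across processes.

```c
/* Bare-bones serial PCG loop: solves A x = b with preconditioner M.
 * Callback and variable names are illustrative only. */
#include <math.h>
#include <stdlib.h>
#include <string.h>

typedef void (*matvec_fn)(const double *x, double *y, size_t n);  /* y = A x     */
typedef void (*precond_fn)(const double *r, double *z, size_t n); /* z = M^-1 r  */

static double dot(const double *a, const double *b, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; ++i) s += a[i] * b[i];
    return s; /* in a distributed code this would be followed by a global sum */
}

int pcg(size_t n, matvec_fn A, precond_fn M,
        const double *b, double *x, double tol, int max_it)
{
    double *r = malloc(n * sizeof *r), *z = malloc(n * sizeof *z);
    double *p = malloc(n * sizeof *p), *q = malloc(n * sizeof *q);

    A(x, q, n);                                         /* r = b - A x0 */
    for (size_t i = 0; i < n; ++i) r[i] = b[i] - q[i];
    M(r, z, n);
    memcpy(p, z, n * sizeof *p);
    double rz = dot(r, z, n), rnorm0 = sqrt(dot(r, r, n));

    int it;
    for (it = 0; it < max_it; ++it) {
        A(p, q, n);
        double alpha = rz / dot(p, q, n);
        for (size_t i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
        if (sqrt(dot(r, r, n)) <= tol * rnorm0) break;  /* relative residual test */
        M(r, z, n);
        double rz_new = dot(r, z, n);
        double beta = rz_new / rz;
        rz = rz_new;
        for (size_t i = 0; i < n; ++i) p[i] = z[i] + beta * p[i];
    }
    free(r); free(z); free(p); free(q);
    return it; /* iterations performed */
}
```

Only the dot products and the two callbacks require communication in a distributed-memory setting, which is why this iteration structure maps naturally onto MPI, in contrast with direct frontal or Gaussian elimination solvers.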
“…Additionally, unlike the previous approaches [17,81], neighbouring partitions in the present approach do not exchange boundary displacement/force entities directly; instead information flow is controlled by the parent coordinator, which is responsible for ensuring compatibility and equilibrium at the partition boundaries. A clear and natural extension of the proposed approach is hierarchic multi-level partitioning, which cannot be accommodated by previous approaches [17,18], and which maps readily to hierarchic HPC architecture with ensuing reduction in inter-processor communication overheads.…”
Section: Introduction (mentioning)
confidence: 99%
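Purely as an illustration of how a hierarchic, coordinator-mediated layout could be mapped onto MPI, the sketch below splits the world communicator into partition groups whose rank-0 processes form a parent-level communicator. The grouping, the coordinator role, and all names are assumptions for illustration, not the scheme used in the cited works.

```c
/* Illustrative only: hierarchic partition groups expressed with MPI
 * communicators. All choices here (group size, coordinator = group rank 0)
 * are assumptions, not taken from the cited approaches. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Group processes into partition groups of, say, 4 ranks each. */
    int group_id = world_rank / 4;
    MPI_Comm group_comm;
    MPI_Comm_split(MPI_COMM_WORLD, group_id, world_rank, &group_comm);

    int group_rank;
    MPI_Comm_rank(group_comm, &group_rank);

    /* Group coordinators (group_rank == 0) join a parent-level communicator;
     * other ranks pass MPI_UNDEFINED and receive MPI_COMM_NULL. Boundary
     * data would then travel worker -> coordinator -> parent level and back,
     * rather than directly between neighbouring partitions. */
    MPI_Comm parent_comm;
    MPI_Comm_split(MPI_COMM_WORLD,
                   group_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &parent_comm);

    if (parent_comm != MPI_COMM_NULL)
        MPI_Comm_free(&parent_comm);
    MPI_Comm_free(&group_comm);
    MPI_Finalize();
    return 0;
}
```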
“…Also, a large number of EBE based preconditioners is available in literature. We may cite the works of Bova and Carey [12] and Gullerud and Dodds [13] as examples of MPI-based SBS implementations using EBE schemes. Okuda et al [25] presents an EBE scheme for massively parallel computers.…”
Section: Introduction (mentioning)
confidence: 99%
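As background to the element-by-element (EBE) schemes mentioned in this excerpt, the sketch below shows the characteristic EBE matrix-vector product that drives such PCG solvers without ever assembling the global stiffness matrix. The element size, data layout, and names are hypothetical.

```c
/* Sketch of an element-by-element matrix-vector product:
 * y = sum over elements of scatter( k_e * gather(x, e) ).
 * Layout and names are illustrative, not from the cited implementations. */
#include <stddef.h>
#include <string.h>

#define NODES_PER_ELEM 8  /* e.g. trilinear hexahedral elements */
#define DOF_PER_NODE   3
#define EDOF (NODES_PER_ELEM * DOF_PER_NODE)

void ebe_matvec(size_t n_elem, size_t n_dof,
                const double k_e[][EDOF][EDOF], /* element stiffness matrices */
                const int conn[][EDOF],         /* element-to-global dof map  */
                const double *x, double *y)
{
    memset(y, 0, n_dof * sizeof *y);
    for (size_t e = 0; e < n_elem; ++e) {
        double xe[EDOF], ye[EDOF];
        for (int a = 0; a < EDOF; ++a) xe[a] = x[conn[e][a]];   /* gather  */
        for (int a = 0; a < EDOF; ++a) {
            ye[a] = 0.0;
            for (int b = 0; b < EDOF; ++b) ye[a] += k_e[e][a][b] * xe[b];
        }
        for (int a = 0; a < EDOF; ++a) y[conn[e][a]] += ye[a];  /* scatter */
    }
    /* In an MPI code, entries for dofs shared between subdomains would be
     * summed across processes after this loop. */
}
```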