2007
DOI: 10.1016/j.pepi.2007.05.008

Toward an automated parallel computing environment for geosciences

Cited by 12 publications (7 citation statements). References 40 publications.
“…The software was developed by the Laboratory of Computational Geodynamics, Graduate University of Chinese Academy of Sciences [37]. We calculate the displacement, stress and strain produced by a dislocation in a three-dimensional homogeneous elastic medium.…”
Section: Comparison of Results Derived from Finite Element Model and …
mentioning
confidence: 99%
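The comparison described in this excerpt pits finite-element results against an analytic dislocation solution. As a purely illustrative sketch of one such closed-form benchmark (an infinite screw dislocation in a homogeneous isotropic elastic medium, not the specific dislocation model or software interface used by the citing authors; the function name and parameter values are assumptions), the displacement, stress and strain can be evaluated as follows:

import numpy as np

def screw_dislocation_fields(x, y, b=1.0, mu=30e9):
    """Closed-form fields of an infinite screw dislocation along the z-axis
    in a homogeneous isotropic elastic medium (illustrative benchmark only).

    x, y : observation-point coordinates measured from the dislocation core (m)
    b    : Burgers vector magnitude (m)
    mu   : shear modulus (Pa)
    """
    r2 = x**2 + y**2
    u_z = b / (2.0 * np.pi) * np.arctan2(y, x)        # out-of-plane displacement
    sigma_xz = -mu * b * y / (2.0 * np.pi * r2)       # shear stress components
    sigma_yz = mu * b * x / (2.0 * np.pi * r2)
    eps_xz = sigma_xz / (2.0 * mu)                    # corresponding elastic strains
    eps_yz = sigma_yz / (2.0 * mu)
    return u_z, (sigma_xz, sigma_yz), (eps_xz, eps_yz)

# Evaluate on a small grid of points away from the singular core.
xs, ys = np.meshgrid(np.linspace(1.0, 10.0, 5), np.linspace(1.0, 10.0, 5))
u, stress, strain = screw_dislocation_fields(xs, ys, b=0.5)
print(u.shape, stress[0].min(), strain[1].max())

Closed-form cases of this kind are the usual yardstick when verifying a finite-element dislocation model against theory.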
“…Benjemaa et al. 2007; Brossier et al. 2008), the finite-element method (FEM) (e.g. Bao et al. 1998; Aagaard et al. 2001; Zhang et al. 2007), the spectral-element method (SEM) (e.g. Seriani & Priolo 1994; Faccioli et al. 1997; Komatitsch & Vilotte 1998; Komatitsch & Tromp 1999; Cohen 2002; Chaljub et al. 2007), which is a high-order FEM using orthogonal Chebyshev or Legendre polynomials as basis functions, and the discontinuous-Galerkin method (DG) (Etienne et al. 2010) and the arbitrary high-order derivatives discontinuous-Galerkin method (ADER-DG) (Kaser & Dumbser 2006; Dumbser & Kaser 2006), which are also high-order FEMs but use numerical fluxes rather than shared basis functions to connect elements.…”
Section: Introduction
mentioning
confidence: 99%
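This excerpt surveys the main grid-based solvers for seismic wave propagation. The simplest member of that family is a finite-difference scheme; a minimal sketch for the 1-D scalar wave equation is given below, where the grid size, wave speed and source parameters are arbitrary assumptions rather than values from any cited study:

import numpy as np

# 1-D scalar wave equation, second-order finite differences in space and time.
nx, nt = 401, 600
dx, c = 5.0, 2000.0                    # grid spacing (m) and wave speed (m/s)
dt = 0.8 * dx / c                      # time step satisfying the CFL stability limit
r2 = (c * dt / dx) ** 2

# Ricker source injected at the centre of the grid.
src_ix = nx // 2
f0, t0 = 10.0, 0.1                     # dominant frequency (Hz) and delay (s)
t = np.arange(nt) * dt
arg = (np.pi * f0 * (t - t0)) ** 2
src = (1.0 - 2.0 * arg) * np.exp(-arg)

u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
for n in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
    u_next = 2.0 * u_curr - u_prev + r2 * lap      # explicit time update
    u_next[src_ix] += dt**2 * src[n]               # add the source term
    u_prev, u_curr = u_curr, u_next                # march forward in time

print("peak amplitude after %d steps: %.3e" % (nt, np.abs(u_curr).max()))

The higher-order FEM, SEM and DG schemes named in the excerpt replace this structured grid and stencil with unstructured elements and polynomial basis functions, which is what makes them attractive for complex geometries.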
“…Computation- and data-intensive geoscience analytics are becoming prevalent. To improve scalability and performance, parallelization technologies are essential [29]. Traditionally, most parallel applications achieve fine-grained parallelism using message-passing infrastructures such as PVM [30] and MPI [31], executed on computer clusters, supercomputers, or grid infrastructures [32].…”
Section: Related Work
mentioning
confidence: 99%
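To make the message-passing pattern mentioned in this excerpt concrete, here is a small sketch using mpi4py, one common Python binding for MPI; the binding and the toy reduction are illustrative choices made here, not something prescribed by the cited works. Each rank processes its own slice of the data and the partial results are combined on rank 0, in the usual coarse-grained data-decomposition style:

# Run with, e.g.:  mpiexec -n 4 python sum_of_squares.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_total = 1_000_000
counts = [n_total // size + (1 if r < n_total % size else 0) for r in range(size)]
start = sum(counts[:rank])

# Each rank works only on its own chunk of the index range.
local = np.arange(start, start + counts[rank], dtype=np.float64)
local_sum = np.square(local).sum()

# Message passing: partial sums are reduced onto rank 0.
global_sum = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of squares over all ranks:", global_sum)

Real geoscience codes distribute mesh partitions or data tiles in the same way, exchanging halo values between neighbouring ranks rather than performing a single reduction.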