2012
DOI: 10.1137/100799411
On Computing Inverse Entries of a Sparse Matrix in an Out-of-Core Environment

Abstract: The inverse of an irreducible sparse matrix is structurally full, so that it is impractical to think of computing or storing it. However, there are several applications where a subset of the entries of the inverse is required. Given a factorization of the sparse matrix held in out-of-core storage, we show how to compute such a subset efficiently, by accessing only parts of the factors. When there are many inverse entries to compute, we need to guarantee that the overall computation scheme has reasonable…
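The core identity behind computing selected inverse entries is that (A⁻¹)ᵢⱼ is the i-th component of the solution of A x = eⱼ, so each requested entry needs one sparse solve against the existing factors. A minimal in-core sketch using SciPy's SuperLU wrapper (this only illustrates the identity; the paper's contribution is organizing such solves efficiently when the factors reside out-of-core, which is not modeled here):

```python
# Illustrative in-core computation of selected entries of A^{-1} from a
# sparse LU factorization.  (A^{-1})_{ij} = (A^{-1} e_j)_i, i.e. entry i of
# the solution of A x = e_j.  Matrix and names here are our own example.
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import splu

n = 50
# A well-conditioned sparse test matrix: identity plus a small random
# sparse perturbation.
A = (identity(n) + 0.1 * sparse_random(n, n, density=0.05, random_state=0)).tocsc()

lu = splu(A)  # sparse LU factorization (SuperLU)

def inverse_entry(lu, i, j, n):
    """Return (A^{-1})_{ij} via one sparse triangular solve pair."""
    e_j = np.zeros(n)
    e_j[j] = 1.0
    return lu.solve(e_j)[i]

# Cross-check a few requested entries against the dense inverse.
A_inv = np.linalg.inv(A.toarray())
for (i, j) in [(0, 0), (3, 7), (49, 12)]:
    assert np.isclose(inverse_entry(lu, i, j, n), A_inv[i, j])
```

When many entries are requested, the choice of which columns eⱼ to solve together determines how much of the factors must be touched; that grouping question is exactly what becomes critical in the out-of-core setting.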

Cited by 34 publications (57 citation statements). References 24 publications.
“…The linear equations in (4), formed by the discretization of eq. (3), are solved by the multifrontal direct solver MUMPS 5.0.2 parallelized with OpenMP (Amestoy et al. 2001, 2012), which avoids the uncertainties in preconditioning and convergence of iterative solutions, especially at low frequencies (Farquharson & Miensopust 2011; Oldenburg et al. 2013). In this section, we will focus on the implementation of CFS-PML in detail.…”
Section: Implementation of CFS-PML
confidence: 99%
“…The numerical test is performed on a Dell Precision Tower 3620 (3.50 GHz Intel Xeon E3-1240 v5, 2 processors with up to 4 cores per processor, up to 16 GB of memory), which is suitable for OpenMP parallel programming. Note that MUMPS 5.0.2, used for factorizing the complex sparse matrix formed by the SFD discretization, is parallelized with OpenMP in an "out-of-core" environment (Amestoy et al. 2001, 2012). When the out-of-core phase is activated, the complete matrix of factors is written to disk and read back each time a solution phase is requested; the memory requirement can therefore be significantly reduced without increasing the factorization time much on a reasonably small number of processors.…”
Section: Numerical Analysis
confidence: 99%
“…We assume that the cache size is adapted to the application, therefore ensuring that the execution time is linearly related to the inverse of the frequency [38]: Exe(w_i, f) = w_i / f. When a task is scheduled to be re-executed at two different speeds f^(1) and f^(2), we always account for both executions, even when the first execution is successful, and hence Exe(…”
Section: Tri-Criteria Problem
confidence: 99%
“…Since the reliability increases with speed, we must have f ≥ f_rel to match the reliability constraint. If task T_i is re-executed (speeds f^(1) and f^(2)), then the execution of T_i is successful if and only if at least one of the attempts does not fail, so that the reliability of T_i is 1 − (1 − R_i(f^(1)))(1 − R_i(f^(2))), and this quantity should be at least equal to R_i(f_rel).…”
Section: Tri-Criteria Problem
confidence: 99%
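The combined reliability above is 1 minus the probability that both (assumed independent) attempts fail. A small sketch with an illustrative reliability target (0.95 is our example value, not from the source):

```python
# Reliability of a task re-executed at two speeds, assuming the two
# attempts fail independently: the task succeeds unless both attempts fail.
def reexec_reliability(r1: float, r2: float) -> float:
    """r1, r2: success probabilities of the two attempts."""
    return 1.0 - (1.0 - r1) * (1.0 - r2)

# Two attempts with reliability 0.9 each combine to 0.99 ...
r = reexec_reliability(0.9, 0.9)
assert abs(r - 0.99) < 1e-12
# ... which meets an example constraint R_i(f_rel) = 0.95.
assert r >= 0.95
```

This is why re-execution lets each individual attempt run at a lower (cheaper) speed than a single execution would need: the constraint binds on the combined quantity, not on each attempt.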