2007
DOI: 10.1007/978-3-540-75755-9_112

Optimizing a Parallel Self-verified Method for Solving Linear Systems

Abstract: Solvers for linear equation systems are commonly used in many different kinds of real applications that deal with large matrices. Nevertheless, two key problems limit the use of linear system solvers in a more extensive range of real applications: computing power and solution correctness. In a previous work, we proposed a method that employs high-performance computing techniques together with verified computing techniques in order to eliminate the problems mentioned above. This paper pres…

Cited by 7 publications (6 citation statements)
References: 2 publications
“…This is especially important when aiming at high-performance computing. Using midpoint–radius arithmetic and directed roundings, we were able to use the proper implementations of highly optimized libraries for the following high-performance hardware: ScaLAPACK for clusters and Parallel Linear Algebra Software for Multicore Architectures for multicore. Further studies could investigate how to implement our verified solution for graphics processing units (GPUs) using Matrix Algebra on GPU and Multicore Architectures.…”
Section: Interval Arithmetic for High-Performance Computing (mentioning, confidence: 99%)
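To make the point about directed roundings concrete, the following is a minimal sketch of a Rump-style midpoint–radius matrix product, assuming IEEE-754 hardware rounding controlled via fesetround. It is not the authors' code: the plain triple loops merely stand in for the optimized dgemm/pdgemm calls that this representation makes possible.

```cpp
// A minimal sketch (assumptions: IEEE-754 hardware, rounding control via
// fesetround, serial code) of a Rump-style midpoint-radius matrix product.
// This is NOT the authors' implementation: the triple loops below merely
// stand in for optimized dgemm/pdgemm calls, which is exactly what makes
// the representation attractive for BLAS/ScaLAPACK/PLASMA.
// Note: strict rounding semantics may need compiler flags such as -frounding-math.
#include <cfenv>
#include <cmath>
#include <vector>

#pragma STDC FENV_ACCESS ON

using Mat = std::vector<double>;  // row-major n x n, illustrative only

// stand-in for an optimized BLAS dgemm; honours the current rounding mode
static void gemm(const Mat& A, const Mat& B, Mat& C, int n) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double s = 0.0;
            for (int k = 0; k < n; ++k) s += A[i*n+k] * B[k*n+j];
            C[i*n+j] = s;
        }
}

// Enclosure of <mA,rA> * <mB,rB> as <mC,rC>: the whole matrix product needs
// only a constant number of rounding-mode switches, independent of n.
void mr_gemm(const Mat& mA, const Mat& rA, const Mat& mB, const Mat& rB,
             Mat& mC, Mat& rC, int n) {
    Mat lo(n*n), hi(n*n), t1(n*n), t2(n*n), absA(n*n), absB(n*n);
    mC.assign(n*n, 0.0); rC.assign(n*n, 0.0);
    for (int i = 0; i < n*n; ++i) { absA[i] = std::fabs(mA[i]); absB[i] = std::fabs(mB[i]); }

    std::fesetround(FE_DOWNWARD);
    gemm(mA, mB, lo, n);                        // lower bound of mA*mB

    std::fesetround(FE_UPWARD);
    gemm(mA, mB, hi, n);                        // upper bound of mA*mB
    for (int i = 0; i < n*n; ++i) absA[i] += rA[i];   // |mA| + rA, rounded up
    gemm(absA, rB, t1, n);                      // (|mA| + rA) * rB
    gemm(rA, absB, t2, n);                      // rA * |mB|
    for (int i = 0; i < n*n; ++i) {
        mC[i] = lo[i] + 0.5 * (hi[i] - lo[i]);  // midpoint, rounded upward
        rC[i] = (mC[i] - lo[i]) + t1[i] + t2[i];// radius covers all rounding errors
    }
    std::fesetround(FE_TONEAREST);
}
```

Each heavy operation above is a pure floating-point matrix product, so the same structure carries over directly to ScaLAPACK on clusters or PLASMA on multicore, with the rounding mode changed only a handful of times per interval operation.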
“…In previous works, a verified algorithm for solving linear systems was parallelized for clusters of computers using midpoint–radius arithmetic (Section 4 introduces more details on the choice of midpoint–radius arithmetic) along with the numerical libraries Scalable Linear Algebra Package (ScaLAPACK) and Parallel Basic Linear Algebra Subprograms (PBLAS). Clusters of computers are considered a good option for achieving better performance without resorting to parallel programming models oriented to very expensive machines.…”
Section: Introduction (mentioning, confidence: 99%)
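For readers unfamiliar with what the quoted "verified algorithm" refers to, the sketch below outlines the enclosure iteration at the core of a Rump-style self-verified solver, written in serial C++ with a toy inf–sup interval type (the names z, C, xa and the helper verify are illustrative assumptions, not the authors' API). The parallel ScaLAPACK/PBLAS version distributes exactly the matrix-vector work shown in the loops.

```cpp
// A minimal sketch (assumptions: serial code, a toy inf-sup interval type,
// hardware directed rounding) of the enclosure iteration at the core of a
// Rump-style self-verified solver.  It is not the authors' ScaLAPACK/PBLAS
// code.  Inputs: z encloses R*(b - A*xa), C encloses I - R*A;
// on success the exact solution of A*x = b lies in xa + X.
#include <algorithm>
#include <cfenv>
#include <vector>

#pragma STDC FENV_ACCESS ON

struct Iv { double lo, hi; };
using IvVec = std::vector<Iv>;
using IvMat = std::vector<IvVec>;    // C[i][j]

static Iv add(Iv a, Iv b) {          // one rounding-mode switch per bound
    Iv r;
    std::fesetround(FE_DOWNWARD); r.lo = a.lo + b.lo;
    std::fesetround(FE_UPWARD);   r.hi = a.hi + b.hi;
    return r;
}
static Iv mul(Iv a, Iv b) {
    Iv r;
    std::fesetround(FE_DOWNWARD);
    r.lo = std::min(std::min(a.lo*b.lo, a.lo*b.hi), std::min(a.hi*b.lo, a.hi*b.hi));
    std::fesetround(FE_UPWARD);
    r.hi = std::max(std::max(a.lo*b.lo, a.lo*b.hi), std::max(a.hi*b.lo, a.hi*b.hi));
    return r;
}
static bool inner(Iv a, Iv b) { return b.lo < a.lo && a.hi < b.hi; }

// Iterate X <- z + C*Y with epsilon inflation of Y until the new iterate
// lies in the interior of Y; then the enclosure is mathematically verified.
bool verify(const IvVec& z, const IvMat& C, IvVec& X, int max_iter = 10) {
    const int n = static_cast<int>(z.size());
    X = z;
    for (int it = 0; it < max_iter; ++it) {
        IvVec Y = X;
        for (Iv& y : Y) {                         // epsilon inflation
            double w = 0.1 * (y.hi - y.lo) + 1e-300;
            y.lo -= w; y.hi += w;
        }
        IvVec Xn(n);
        for (int i = 0; i < n; ++i) {             // Xn = z + C*Y
            Iv s = z[i];
            for (int j = 0; j < n; ++j) s = add(s, mul(C[i][j], Y[j]));
            Xn[i] = s;
        }
        bool ok = true;
        for (int i = 0; i < n; ++i) ok = ok && inner(Xn[i], Y[i]);
        X = Xn;
        if (ok) { std::fesetround(FE_TONEAREST); return true; }
    }
    std::fesetround(FE_TONEAREST);
    return false;                                  // verification failed
}
```

If verify returns true, Rump's inclusion theorem guarantees that the exact solution is contained in xa + X; the intervals z and C must themselves have been enclosed with directed roundings, for instance with the midpoint–radius product sketched earlier.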
“…A parallel version of the self-verified method for solving linear systems was presented in [19,18]. In this paper we propose the following improvements aiming at better performance:
- calculation of R using only floating-point operations;
- avoiding the use of C-XSC elements that could slow down the execution;
- use of the fast and highly optimized libraries BLAS and LAPACK in the first sequential version (PBLAS and ScaLAPACK, respectively, in the parallel version);
- use of both interval arithmetics: infimum–supremum and midpoint–radius (as proposed by Rump [27]);
- use of techniques to avoid switching the rounding mode in infimum–supremum operations (proposed by Bohlender [2,3]).…”
Section: Parallel Approach (mentioning, confidence: 99%)
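The last item of the quoted list, avoiding rounding-mode switches in infimum–supremum operations, can be illustrated with a small sketch. This follows the commonly described negation trick credited to Bohlender in the quote (keep the FPU rounding upward and obtain infima through exact negation); it is an illustration under that assumption, not the authors' implementation.

```cpp
// A small sketch of the rounding-mode-switch avoidance credited to Bohlender
// in the quote above (assumption: the commonly described negation trick, not
// necessarily the authors' exact code).  Because IEEE-754 negation is exact,
// fl_down(x + y) == -fl_up((-x) + (-y)), so both bounds of an
// infimum-supremum operation can be computed while the FPU stays in
// round-upward mode the whole time.
#include <cfenv>

#pragma STDC FENV_ACCESS ON   // compilers may also need e.g. -frounding-math

struct Iv { double lo, hi; };

// Naive version: two rounding-mode switches on every single addition.
Iv add_switching(Iv a, Iv b) {
    Iv r;
    std::fesetround(FE_DOWNWARD); r.lo = a.lo + b.lo;
    std::fesetround(FE_UPWARD);   r.hi = a.hi + b.hi;
    return r;
}

// Bohlender-style version: assumes FE_UPWARD was set once, globally.
Iv add_upward_only(Iv a, Iv b) {
    Iv r;
    r.lo = -((-a.lo) + (-b.lo));  // infimum via exact negation, still rounded up
    r.hi = a.hi + b.hi;           // supremum directly
    return r;
}

int main() {
    std::fesetround(FE_UPWARD);   // set once for the whole inf-sup computation
    Iv a{0.1, 0.2}, b{0.3, 0.4};
    Iv c = add_upward_only(a, b); // c rigorously encloses a + b
    std::fesetround(FE_TONEAREST);
    (void)c;
    return 0;
}
```

Changing the rounding mode can be comparatively expensive on some processors, so keeping a single global mode for long sequences of infimum–supremum operations is the performance motivation behind this technique.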
“…A parallel version of the self-verified method for solving linear systems was presented in [19,18]. In this research we propose improvements aiming at better performance.…”
(mentioning, confidence: 99%)
“…Moreover, the major goal of this paper is to point out the advantages and drawbacks of parallelizing a self-verifying method for solving linear systems over distributed environments. Readers interested in the authors' previous publications on this subject should see [17,18].…”
(mentioning, confidence: 99%)