Proceedings of the 2015 ACM International Symposium on Symbolic and Algebraic Computation
DOI: 10.1145/2755996.2756684

Exact Linear Algebra Algorithmic

Abstract: Exact linear algebra is a core component of many symbolic and algebraic computations, as it often delivers competitive theoretical complexities and also better harnesses the efficiency of modern computing infrastructures. In this tutorial we will present an overview of recent advances in exact linear algebra algorithmics and implementation techniques, and highlight the few key ideas that have proven successful in their design. As an illustration, we will study in more detail the computation of some matrix …

Citations: cited by 2 publications (2 citation statements)
References: 28 publications (30 reference statements)

“…Furthermore, using Strassen's subcubic algorithm [Strassen 1969] and a stronger bound on m_i one can reach even better performance [Dumas et al. 2008; FFLAS-FFPACK-Team 2016]. This approach provides the best performance per bit for modular matrix multiplication [Pernet 2015] and thus we rely on it to build our RNS conversion algorithms from Section 3.…”
Section: Methods
confidence: 99%
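
The performance claim above rests on a delayed-reduction trick that is worth spelling out. The following is a minimal sketch of the general technique (assumed names and layout; it is not FFLAS-FFPACK's actual kernel): matrices over Z/pZ are stored as IEEE doubles, and since a dot product of length n accumulates at most n*(p-1)^2, the floating-point result is exact whenever that bound stays below 2^53, so a single reduction mod p at the end suffices.

#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch: n x n matrix product over Z/pZ in double precision, with one
// deferred modular reduction. Exact provided n*(p-1)^2 < 2^53.
std::vector<double> modmul(const std::vector<double>& A,
                           const std::vector<double>& B,
                           std::size_t n, double p) {
    assert(static_cast<double>(n) * (p - 1) * (p - 1) < 9007199254740992.0); // 2^53
    std::vector<double> C(n * n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k) {
            const double aik = A[i * n + k];
            for (std::size_t j = 0; j < n; ++j)
                C[i * n + j] += aik * B[k * n + j]; // pure floating-point inner loop
        }
    for (double& c : C) c = std::fmod(c, p); // delayed reduction mod p
    return C;
}

In production code the triple loop would be delegated to a BLAS dgemm, and, as the quoted passage notes, Strassen's recursion together with sharper accumulation bounds widens the set of admissible moduli.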
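The RNS conversions the authors build on this kernel rest, at bottom, on the Chinese Remainder Theorem. As a rough, self-contained illustration (hypothetical helper names, not the cited paper's algorithm, which operates on matrices and uses a modular product like the one above), an integer is sent to its residues modulo pairwise-coprime moduli and recovered by CRT:

#include <cstddef>
#include <cstdint>
#include <vector>

// Forward map: residues of x modulo each of the pairwise-coprime moduli.
std::vector<uint64_t> to_rns(uint64_t x, const std::vector<uint64_t>& mods) {
    std::vector<uint64_t> res;
    for (uint64_t m : mods) res.push_back(x % m);
    return res;
}

// Modular inverse by the extended Euclidean algorithm (gcd(a, m) = 1).
uint64_t inv_mod(uint64_t a, uint64_t m) {
    int64_t t = 0, nt = 1, r = static_cast<int64_t>(m), nr = static_cast<int64_t>(a % m);
    while (nr != 0) {
        int64_t q = r / nr;
        int64_t tmp = t - q * nt; t = nt; nt = tmp;
        tmp = r - q * nr; r = nr; nr = tmp;
    }
    return static_cast<uint64_t>(t < 0 ? t + static_cast<int64_t>(m) : t);
}

// Backward map: CRT reconstruction. Assumes the product of the moduli fits
// in 64 bits; __uint128_t (a GCC/Clang extension) absorbs the intermediates.
uint64_t from_rns(const std::vector<uint64_t>& res,
                  const std::vector<uint64_t>& mods) {
    uint64_t M = 1;
    for (uint64_t m : mods) M *= m;
    __uint128_t x = 0;
    for (std::size_t i = 0; i < mods.size(); ++i) {
        uint64_t Mi = M / mods[i];
        uint64_t yi = inv_mod(Mi % mods[i], mods[i]); // Mi * yi == 1 (mod mods[i])
        x += (__uint128_t)res[i] * yi % mods[i] * Mi;
    }
    return static_cast<uint64_t>(x % M);
}

For instance, with moduli {3, 5} the integer 7 maps to residues {1, 2}, and from_rns({1, 2}, {3, 5}) returns 7.
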
“…Indeed, the floating-point units and memory subsystems of workstation processors are designed to achieve maximum floating-point performance on essential numerical kernels, such as the vector product around which all linear algebra can be built. At the time of writing this article, it remains true that a high-end processor can achieve more floating-point operations than integer operations per second [2]. This is mainly due to the wide vector units (such as Intel's AVX extensions) not fully supporting 64-bit integer multiplication.…”
Section: Introduction
confidence: 99%