Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms 2018
DOI: 10.1137/1.9781611975031.67
Improved Rectangular Matrix Multiplication using Powers of the Coppersmith-Winograd Tensor

Abstract: In the past few years, successive improvements of the asymptotic complexity of square matrix multiplication have been obtained by developing novel methods to analyze the powers of the Coppersmith-Winograd tensor, a basic construction introduced thirty years ago. In this paper we show how to generalize this approach to make progress on the complexity of rectangular matrix multiplication as well, by developing a framework to analyze powers of tensors in an asymmetric way. By applying this methodology to the four…

Cited by 87 publications (38 citation statements) · References 23 publications
“…This algorithm is designed to reduce the number of multiplication operations, which is not a direct concern for distributed algorithms, in which the main cost is due to communication. Le Gall [14] improved this result for some range of sparsity by improving general rectangular matrix multiplication, for which a further improvement was recently given by Le Gall and Urrutia [17]. Kaplan et al [20] give an algorithm for multiplying sparse rectangular matrices, and Amossen and Pagh [3] give a fast algorithm for the case of sparse square matrices for which the product is also sparse.…”
Section: Related Work
confidence: 97%
“…In many cases, multiplication is required to be carried out for sparse matrices, and this need has been generating much effort in designing algorithms that are faster given sparse inputs, both in sequential (e.g., [3,14,17,20,32]) and parallel (e.g., [4-8, 22, 23, 27]) settings.…”
Section: Introduction
confidence: 99%
“…Case 2: the input graph is directed. Substituting ω(0.7) ≤ 2.154399 and ω(0.75) ≤ 2.187543 [11] into the total preprocessing time, we obtain: given a directed graph G = (V, E), there is a DSO with preprocessing time Õ(n^max{2a+ω(1−a), 3−a} · M²) = Õ(n^2.723277 · M²).…”
Section: :7
confidence: 99%
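To see where the exponent 2.723277 in this excerpt comes from: the two terms in the max are balanced, and ω(1−a) can be bounded by linear interpolation between the two cited values, using the standard fact that ω(·) is convex. The sketch below reproduces the arithmetic; the balancing value a ≈ 0.276723, the helper name omega_upper, and the interpolation step are our own illustration and are not taken from the citing paper.

```python
# Illustrative check (not from the cited paper) of the exponent
# max{2a + omega(1-a), 3 - a} = 2.723277 quoted above, assuming
# omega(.) is convex so it is upper-bounded by linear interpolation
# between the cited values omega(0.7) <= 2.154399 and
# omega(0.75) <= 2.187543 [11].

def omega_upper(k, p0=0.7, w0=2.154399, p1=0.75, w1=2.187543):
    """Upper bound on omega(k) for p0 <= k <= p1 via linear
    interpolation, valid because omega is convex in its argument."""
    assert p0 <= k <= p1
    return w0 + (k - p0) / (p1 - p0) * (w1 - w0)

# Balance the two terms 2a + omega(1-a) and 3 - a at a ~ 0.276723
# (our hypothetical balancing point).
a = 0.276723
term1 = 2 * a + omega_upper(1 - a)  # ~ 2.723275
term2 = 3 - a                       # = 2.723277
print(f"{max(term1, term2):.6f}")   # 2.723277, matching the excerpt
```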
“…We could sharpen our results using the so-called exponent ω₂ of rectangular matrix multiplication in size (s, s) × (s, s²). We can of course take ω₂ ≤ ω + 1 ≤ 3.373, but the better result ω₂ ≤ 3.252 is known [27]. We will not use these refinements in this paper.…”
Section: Polynomial and Matrix Arithmetic
confidence: 99%
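For context on the bound ω₂ ≤ ω + 1 quoted above: it follows from the standard observation that an (s, s) × (s, s²) product splits into s independent square products. Below is a minimal sketch of this folklore argument (our illustration, not the improved method of [27]).

```latex
% Folklore bound \omega_2 \le \omega + 1 (our sketch, not the method
% of [27]): split the s x s^2 factor into s square s x s blocks.
\[
  A\,[\,B_1 \mid B_2 \mid \cdots \mid B_s\,]
  = [\,AB_1 \mid AB_2 \mid \cdots \mid AB_s\,],
  \qquad A,\, B_i \in \mathbb{F}^{s \times s}.
\]
% Each product A B_i costs O(s^{\omega}), so the total cost is
\[
  s \cdot O(s^{\omega}) = O(s^{\omega+1})
  \quad\Longrightarrow\quad
  \omega_2 \le \omega + 1 \le 3.373
  \quad (\text{using } \omega \le 2.373).
\]
```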