2020
DOI: 10.1145/3428226

A sparse iteration space transformation framework for sparse tensor algebra

Abstract: We address the problem of optimizing sparse tensor algebra in a compiler and show how to define standard loop transformations---split, collapse, and reorder---on sparse iteration spaces. The key idea is to track the transformation functions that map the original iteration space to derived iteration spaces. These functions are needed by the code generator to emit code that maps coordinates between iteration spaces at runtime, since the coordinates in the sparse data structures remain in the original iteration space.
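The abstract's key idea can be illustrated with a short sketch. The Python below is not TACO's implementation; it is a minimal, hypothetical illustration of how a split transformation derives a new iteration space while the sparse data structure keeps its coordinates in the original space, so the generated code must map coordinates between the two spaces at runtime.

```python
# A minimal sketch (not TACO's implementation; names are invented for
# exposition): when an iteration variable i is split into (io, ii), the
# compiler tracks the functions mapping between the original and derived
# iteration spaces, because the sparse data structure still stores
# coordinates in the original space.

def split_forward(i, factor):
    """Map an original-space coordinate i to derived-space (io, ii)."""
    return i // factor, i % factor

def split_backward(io, ii, factor):
    """Recover the original-space coordinate from derived coordinates."""
    return io * factor + ii

# A sparse vector in coordinate form: stored coordinates live in the
# ORIGINAL iteration space.
coords = [2, 5, 11, 13]
vals = [1.0, 2.0, 3.0, 4.0]
factor = 4

# Iterate the derived (io, ii) space; at runtime the generated code maps
# each derived point back to an original coordinate to compare against
# the stored coordinates.
pos = 0
for io in range(max(coords) // factor + 1):
    for ii in range(factor):
        i = split_backward(io, ii, factor)  # derived -> original
        if pos < len(coords) and coords[pos] == i:
            print(f"derived ({io},{ii}) -> original {i}: value {vals[pos]}")
            pos += 1
```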

Cited by 42 publications (35 citation statements); references 52 publications.
“…How to effectively exploit sparse vectors and matrices has been well-studied in the past for linear algebra problems [1,17,19,22,24,23,26,28,33,51,59,81,50,62,76,88]. The growing popularity of deep learning and big data has sparked a similar interest in studying how machine learning kernels can take advantage of sparse tensors [15,40,41,43,67,70,75].…”
Section: Sparse Tensors and Storage Formats
confidence: 99%
“…For this, TACO provides the co-iteration formulation that can be used to generate code to co-iterate over any number of sparse and dense tensors, which is necessary for general kernel fusion. Kjolstad et al. [41] and Senanayake et al. [67] also extended the TACO compiler with a scheduling language that lets users (or automatic systems) organize the iteration over tensor expressions, which lets them tile, control fusion/fission, statically load-balance, and generate GPU code for sparse tensor algebra kernels. Sparse tensor support in MLIR borrows heavily from the foundation laid by TACO.…”
Section: Sparse Compilers
confidence: 99%
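The two-operand case gives the flavor of what co-iteration means. The sketch below (plain Python, not TACO's generated code) merges the sorted coordinate streams of two sparse vectors to compute their elementwise product, so only coordinates present in both inputs produce output.

```python
# A minimal sketch of two-way co-iteration, the pattern that TACO's
# formulation generalizes to any number of sparse and dense operands.
# Elementwise multiplication is an intersection: a coordinate contributes
# only if both operands store a value there.

def co_iterate_multiply(a, b):
    """a, b: lists of (coordinate, value) sorted by coordinate."""
    out = []
    pa = pb = 0
    while pa < len(a) and pb < len(b):
        ia, va = a[pa]
        ib, vb = b[pb]
        i = min(ia, ib)                  # the candidate coordinate
        if ia == i and ib == i:          # both operands are nonzero at i
            out.append((i, va * vb))
        pa += ia == i                    # advance each matching iterator
        pb += ib == i
    return out

print(co_iterate_multiply([(0, 2.0), (3, 1.0), (7, 5.0)],
                          [(3, 4.0), (7, 2.0), (9, 6.0)]))
# -> [(3, 4.0), (7, 10.0)]
```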
“…Our extension is open-source and publicly available at https://github.com/tensor-compiler/taco/tree/array_algebra. Like the TACO compiler, our sparse array compiler takes an algorithm description, a format language [Chou et al. 2018], and a scheduling language [Senanayake et al. 2020].…”
Section: Overview
confidence: 99%
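The separation this citation describes, with the algorithm, the storage formats, and the schedule as independent inputs, can be sketched as follows. All names below are invented for exposition; this is not the actual TACO or sparse array compiler API.

```python
# A hypothetical sketch (invented names, not a real compiler API) of the
# three separate inputs the citation describes: an algorithm description,
# a format language, and a scheduling language.

algorithm = "A(i,j) = B(i,k) * C(k,j)"      # what to compute

formats = {                                  # how each operand is stored
    "A": ("dense", "dense"),
    "B": ("dense", "compressed"),            # CSR-like: compressed in k
    "C": ("dense", "dense"),
}

schedule = [                                 # how to organize the iteration
    ("split", "i", ("i0", "i1"), 32),        # tile i by 32
    ("reorder", ("i0", "k", "i1", "j")),     # move the tiled loop inward
    ("parallelize", "i0"),                   # run outer tiles across threads
]

# A real compiler lowers (algorithm, formats, schedule) into fused sparse
# loops; the point here is only that the three concerns are independent.
for part in (algorithm, formats, schedule):
    print(part)
```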
“…Most directly related to our work is the body of work on the Sparse Tensor Algebra Compiler (TACO) [Chou et al. 2018; Kjolstad et al. 2019; Kjolstad et al. 2017; Senanayake et al. 2020]. Our work shows how to generalize the compilation theory behind TACO [Kjølstad 2020] to the much broader class of array programs, by allowing any function to be applied to sparse arrays with any fill value.…”
Section: Sparse Array Language Compilation
confidence: 99%
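A minimal sketch of the fill-value generalization mentioned here: stored entries are mapped directly by the function and the fill (background) value is mapped once, so implicit entries never need to be materialized. The helper below is hypothetical, not the cited compiler's API.

```python
# A minimal sketch of applying an arbitrary function to a sparse array
# with an arbitrary fill value (hypothetical helper, invented for
# exposition). Stored entries are mapped directly; the fill is mapped once.

def map_sparse(f, stored, fill):
    """Apply f to a sparse array given as {coordinate: value} plus a fill."""
    new_fill = f(fill)                     # f applied once to the background
    mapped = {c: f(v) for c, v in stored.items()}
    # Entries that now equal the new fill can be dropped to stay sparse.
    mapped = {c: v for c, v in mapped.items() if v != new_fill}
    return mapped, new_fill

# f(x) = x + 1 on a zero-filled sparse vector yields a ONE-filled result:
# both stored values shift by one, and the background becomes 1.0.
print(map_sparse(lambda x: x + 1, {2: 5.0, 7: -1.0}, fill=0.0))
# -> ({2: 6.0, 7: 0.0}, 1.0)
```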
“…Our work on graph optimization builds on substantial efforts for optimization of computational graphs of tensor operations. Tensor contraction can be optimized via parallelization [44], [27], [26], [52], efficient transposition [54], blocking [12], [32], [22], [46], exploiting symmetry [18], [52], [51], and sparsity [28], [42], [26], [35], [42], [50]. For complicated tensor graphs, specialized compilers like XLA [55] and TVM [10] rewrite the computational graph to optimize program execution and memory allocation on dedicated hardware.…”
Section: Previous Work
confidence: 99%