2022
DOI: 10.1109/tpds.2021.3090328
A Pattern-Based SpGEMM Library for Multi-Core and Many-Core Architectures

Cited by 18 publications (7 citation statements); references 48 publications.
“…Previous research (Xie et al., 2021) trains a model, called MatNet, on matrix features and thumbnails to predict the best-performing SpGEMM algorithm for given input matrices, but it has several limitations and drawbacks. (a) The model uses matrix thumbnails as one of its features for training and prediction.…”
Section: Motivation
confidence: 99%
“…For result accumulation, there are sparse accumulators and dense accumulators (Patwary et al., 2015). Although many algorithms address the issues above, experiments (Xie et al., 2021) have shown that no single algorithm achieves the best SpGEMM performance across all matrices.…”
Section: Introduction
confidence: 99%
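The sparse accumulator mentioned above is the core of Gustavson's row-wise SpGEMM, on which the cited algorithms build: each row of C is formed by scaling and merging rows of B, with a hash map collecting partial sums per output column. The following is a minimal sketch, assuming CSR input given as plain (indptr, indices, data) lists; the function name and representation are illustrative, not taken from the cited library.

```python
# Sketch of Gustavson's row-wise SpGEMM with a sparse (hash-map) accumulator.
# CSR matrices are passed as (indptr, indices, data) triples; names are
# illustrative assumptions, not the cited library's API.

def spgemm_gustavson(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val):
    c_ptr, c_idx, c_val = [0], [], []
    for i in range(len(a_ptr) - 1):
        acc = {}  # sparse accumulator: output column -> partial sum
        # For each nonzero A[i,k], merge the scaled row k of B into acc.
        for kk in range(a_ptr[i], a_ptr[i + 1]):
            k, a_ik = a_idx[kk], a_val[kk]
            for jj in range(b_ptr[k], b_ptr[k + 1]):
                j = b_idx[jj]
                acc[j] = acc.get(j, 0.0) + a_ik * b_val[jj]
        # Emit row i of C with columns in ascending order.
        for j in sorted(acc):
            c_idx.append(j)
            c_val.append(acc[j])
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val
```

A dense accumulator replaces the hash map with a length-n array plus an occupancy list, trading memory for cheaper updates; which variant wins depends on the sparsity pattern, which is exactly why no single choice dominates across all matrices.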
“…They have also proposed a threshold to switch between a dense and a sparse representation of each tile, and designed a tiled algorithm for SpMV [37]. Finally, Xie et al. presented a novel AI-based approach for SpGEMM [44]. Although the algorithms on which they rely all use the Gustavson method, they investigate different matrix storage formats and algorithms and train a custom deep-learning network that chooses the best solution for given input matrices and target architecture.…”
Section: Related Work
confidence: 99%
“…Compared to sparse matrix-vector multiplication (SpMV), the more challenging SpGEMM has been covered to a lesser extent in the literature so far, as pointed out by Winter et al. [29]. Recently, however, the optimization of static SpGEMM algorithms for specific parallel architectures is on the rise, e.g., for multithreaded CPUs [19,13], GPUs [29], CPU/GPU combinations [11,30], and other accelerators [19]. A possible reason for this spike could be the use of SpGEMM in deep learning with sparse DNNs, as described in Ref.…”
Section: Related Work
confidence: 99%