Proceedings of the 2nd Workshop on Parallel Programming for Analytics Applications 2015
DOI: 10.1145/2726935.2726941
Characterizing dataset dependence for sparse matrix-vector multiplication on GPUs

Abstract: Sparse matrix-vector multiplication (SpMV) is a widely used kernel in scientific applications as well as data analytics. Many GPU implementations of SpMV have been proposed, each based on a different sparse matrix representation. However, no sparse matrix representation is consistently superior; the best representation varies across sparse matrices with different sparsity patterns. In this paper we study four popular sparse representations implemented in the NVIDIA cuSPARSE library: CSR, ELL, COO and a hybrid ELL-C…
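For context on the representations compared above: CSR stores a matrix as three arrays (row offsets, column indices, values), and a common GPU baseline assigns one thread per row. The sketch below is a minimal illustration of such a scalar CSR kernel; it is not the cuSPARSE implementation the paper benchmarks.

// Minimal scalar CSR SpMV: y = A*x, one thread per row.
// Illustrative sketch only; cuSPARSE uses more sophisticated kernels.
__global__ void spmv_csr_scalar(int num_rows,
                                const int *row_offsets,   // size num_rows + 1
                                const int *col_indices,   // size nnz
                                const float *values,      // size nnz
                                const float *x,
                                float *y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float sum = 0.0f;
        for (int j = row_offsets[row]; j < row_offsets[row + 1]; ++j)
            sum += values[j] * x[col_indices[j]];
        y[row] = sum;
    }
}

One-thread-per-row load-balances poorly when row lengths vary widely, which is one reason no single representation wins across sparsity patterns.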

Cited by 9 publications (8 citation statements) | References 13 publications
“…We observed that while for COO, and to a more limited extent CSR, the higher the nnz frac the higher the GF/s on average, this trend is reversed for ELL and the hybrid schemes: the lower the nnz frac the better the average GF/s. Note that there are numerous outliers to these trends: while previous results showed that, on a restricted dataset, nnz frac was a possible predictor of the best representation [23], the extensive study we conduct in this paper showed the need to consider additional features to properly determine the best representation in general.…”
Section: Impact of ELL and CSR
confidence: 71%
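Assuming nnz frac denotes the fraction of nonzero entries, nnz / (rows × cols) — the name comes from the quote, the exact definition is our assumption — the feature is a one-line host-side computation:

// Host-side helper: fraction of nonzero entries.
// Definition (nnz / (rows * cols)) assumed from context.
double nnz_frac(long long nnz, long long rows, long long cols) {
    return (double)nnz / ((double)rows * (double)cols);
}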
“…This paper follows our previous work [23] and raises the question of how to choose between these representations in an automated, portable, and systematic way.…”
Section: Related Work
confidence: 86%
“…In addition to maintaining comparable performance in the remaining two datasets, our design is also flexible enough to provide distances which require the semiring modifications outlined in subsection 4.2 while using less memory. As mentioned in section 2, it is not uncommon to see different sparse implementations performing better on some datasets than others [45], and the flexibility of our implementation, as well as our well-defined set of rules for supporting a wide array of distances, will allow us to continue optimizing our execution strategies to support patterns that we find frequently occurring across different sparse datasets.…”
Section: Runtime Performance
confidence: 92%
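The "semiring modifications" referenced here generalize SpMV by substituting distance-specific operators for multiply and add. The sketch below shows one way to express that in CUDA; the operator and parameter names are ours, not the cited paper's API.

// Hypothetical semiring SpMV sketch over CSR: Times and Plus replace
// * and +, e.g. Times = |a - b| with Plus = + gives an L1-style sum.
// Names are illustrative, not the cited paper's interface.
struct AbsDiff { __device__ float operator()(float a, float b) const { return fabsf(a - b); } };
struct Add     { __device__ float operator()(float a, float b) const { return a + b; } };

template <typename Times, typename Plus>
__global__ void spmv_semiring(int num_rows,
                              const int *row_offsets,
                              const int *col_indices,
                              const float *values,
                              const float *x,
                              float *y,
                              float identity,   // identity element of Plus
                              Times times, Plus plus) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float acc = identity;
        for (int j = row_offsets[row]; j < row_offsets[row + 1]; ++j)
            acc = plus(acc, times(values[j], x[col_indices[j]]));
        y[row] = acc;
    }
}

// Example launch: spmv_semiring<<<blocks, threads>>>(n, ro, ci, vals, x, y, 0.0f, AbsDiff{}, Add{});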
“…In high performance computing environments, these solutions are designed around both hardware and software constraints [10, 22-24, 28], often making use of specialized hardware capabilities and optimizing for specific sparsity patterns, an unfortunate side effect that can reduce their potential for reuse. What complicates this further is the number of different optimized variants of sparse matrix multiplication available in open source libraries, each using different concurrency patterns and available memory to provide speedups based on either supported sparse formats or the assumed density of either the inputs or the outputs [33, 45].…”
Section: Related Work, 2.1 Sparse Matrix Multiplication
confidence: 99%
“…However, there exists some knowledge about which format is more suitable for a given matrix. For example, Vázquez et al. (2011) and Sedaghati et al. (2015) suggest that the density of a matrix is a guiding factor for the selection of a particular data structure. Furthermore, as shown by Vázquez et al. (2011), the variability of row lengths can be a relevant criterion in the selection of data formats.…”
Section: Introduction
confidence: 99%
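The two features named in this quote — density and row-length variability — lend themselves to a simple rule of thumb. The host-side sketch below is a toy illustration; the thresholds are invented for this example, whereas the cited works derive such decision rules empirically.

#include <cmath>
#include <string>
#include <vector>

// Toy format-selection heuristic based on overall density and the
// coefficient of variation of row lengths. Thresholds are invented
// for illustration only.
std::string pick_format(const std::vector<int> &row_lengths, long long cols) {
    long long rows = (long long)row_lengths.size();
    long long nnz = 0;
    for (int len : row_lengths) nnz += len;
    double mean = (double)nnz / (double)rows;
    double var = 0.0;
    for (int len : row_lengths) var += (len - mean) * (len - mean);
    var /= (double)rows;
    double cv = mean > 0 ? std::sqrt(var) / mean : 0.0;   // row-length variability
    double density = (double)nnz / ((double)rows * (double)cols);

    if (cv < 0.5) return "ELL";        // uniform rows: little ELL padding waste
    if (density > 1e-2) return "CSR";  // comparatively dense, irregular rows
    return "HYB";                      // irregular and very sparse
}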