2016 IEEE 22nd International Conference on Parallel and Distributed Systems (ICPADS)
DOI: 10.1109/icpads.2016.0120
Machine Learning Approach for the Predicting Performance of SpMV on GPU

Cited by 13 publications (17 citation statements)
References 18 publications
“…Recent studies such as [18] have explored the concept of using ML techniques such as multi-layer perceptron (MLP) and support vector regression (SVR) to model SpMV performance. We draw inspiration from these studies and extend them by using an ensemble of MLP for the performance modeling task.…”
Section: Results: SpMV Performance Modeling (confidence: 99%)
“…Benatia et al [18] proposed to use multi-layer perceptron (MLP) and support vector regression (SVR) to predict the performance of an SpMV operation. On average, this approach achieves a low prediction error of 7% to 14% on a dataset of 1800 matrices.…”
Section: Related Work (confidence: 99%)
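The approach the statements above describe (regressing SpMV performance from matrix features with MLP and SVR models) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature set, synthetic data, and model hyperparameters are all assumptions.

```python
# Illustrative sketch (not the paper's implementation): predict SpMV
# performance from sparse-matrix structural features using MLP and SVR
# regressors, as in the cited approach. Features and targets are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
# Hypothetical features, e.g. normalized row count, nnz, mean/max
# nonzeros per row, variance of row lengths.
X = rng.uniform(0.0, 1.0, size=(n, 5))
# Synthetic "GFLOPS" target: an arbitrary smooth function of the features.
y = 5.0 + 10.0 * X[:, 1] + 5.0 * X[:, 2] + rng.normal(0.0, 0.1, n)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

results = {}
for name, model in [
    ("MLP", MLPRegressor(hidden_layer_sizes=(32, 32),
                         max_iter=2000, random_state=0)),
    ("SVR", SVR(kernel="rbf", C=10.0)),
]:
    # Standardize features before the regressor, as is standard practice.
    pipe = make_pipeline(StandardScaler(), model)
    pipe.fit(X_train, y_train)
    pred = pipe.predict(X_test)
    # Mean relative error, the metric quoted (7% to 14%) in the statement.
    rel_err = float(np.mean(np.abs(pred - y_test) / np.abs(y_test)))
    results[name] = rel_err
    print(f"{name}: mean relative error = {rel_err:.2%}")
```

On a real dataset the features would be extracted from the sparse matrices themselves and the target would be measured SpMV throughput per format.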
“…To address this issue, we used a machine learning approach to build performance models for the SpMV kernel under different sparse formats both on the CPU side and the GPU side. This approach was presented in our previous work (Benatia et al, 2016) for the sparse formats COO, CSR, ELL, and HYB on the GPU side. We found that it is straightforward to use the same approach to train performance models for the SpMV kernel using the COO and CSR formats on the CPU side.…”
Section: Sparse Matrix Partitioning for SpMV on CPU-GPU Heterogeneous… (confidence: 99%)
“…The off-line learning stage consists of training the performance models of the SpMV kernel under different sparse formats on the available processing units (CPU and GPU). The details of this stage were presented in our previous work (Benatia et al, 2016); we summarize its main points in the following.…”
Section: Sparse Matrix Partitioning for SpMV on CPU-GPU Heterogeneous… (confidence: 99%)
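The on-line counterpart implied by the quoted passage is that, once per-format performance models exist for each processing unit, the system queries them to choose where and in which format to run SpMV. A minimal sketch, with the model outputs stubbed by invented numbers (the trained regressors of the previous stage would supply these in practice):

```python
# Hypothetical sketch of the on-line selection step: given per-format,
# per-device performance models, pick the best (device, format) pair.
# predict_gflops() stands in for a trained regressor's .predict();
# all scores below are invented for illustration.
def predict_gflops(device, fmt, features):
    table = {
        ("GPU", "COO"): 8.0,  ("GPU", "CSR"): 12.0,
        ("GPU", "ELL"): 15.0, ("GPU", "HYB"): 14.0,
        ("CPU", "COO"): 4.0,  ("CPU", "CSR"): 6.0,
    }
    return table[(device, fmt)]

def best_choice(features):
    # GPU-side formats (COO, CSR, ELL, HYB) and CPU-side formats
    # (COO, CSR), matching the formats named in the quoted statements.
    candidates = ([("GPU", f) for f in ("COO", "CSR", "ELL", "HYB")]
                  + [("CPU", f) for f in ("COO", "CSR")])
    return max(candidates, key=lambda c: predict_gflops(*c, features))

print(best_choice(None))  # → ('GPU', 'ELL') under these fake scores
```

In a real partitioning scheme the decision would be made per matrix block rather than per whole matrix, using feature vectors extracted from each block.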