In this article we investigate some computational aspects of GPU-accelerated matrix-vector multiplication where the matrix is sparse. In particular, we deal with sparse matrices arising in modelling with Markovian queuing models. The model used in our research is a Markovian queuing model of a wireless device, which describes the device's behavior during possible channel occupation by other devices. We study the efficiency of multiplying a sparse matrix by a dense vector using an appropriate, ready-to-use GPU-accelerated mathematical library, namely CUSP. For the CUSP library we discuss the available sparse-matrix storage formats and their impact on performance on the CUDA platform, with its fine-grained parallel GPU architecture. Our aim is to find the best format for storing a sparse matrix for GPU computation (especially the matrix associated with the Markovian model of a wireless device). We compare the time, the performance, and the speed-up on an NVIDIA Tesla C2050 card (with ECC enabled). For unstructured matrices (such as our Markovian matrices), we observe speed-ups of over 8 times with respect to CPU-only computations.
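The abstract refers to sparse matrix-vector multiplication performed with the CUSP library under different storage formats. A minimal sketch of such a computation is shown below; the file name "Q.mtx" and the choice of the HYB format and double precision are illustrative assumptions, not details taken from the article.

```cpp
#include <cusp/csr_matrix.h>
#include <cusp/hyb_matrix.h>
#include <cusp/array1d.h>
#include <cusp/multiply.h>
#include <cusp/io/matrix_market.h>

int main(void)
{
    // Load a sparse matrix (e.g. a Markovian transition-rate matrix) on the host;
    // "Q.mtx" is a placeholder file name.
    cusp::csr_matrix<int, double, cusp::host_memory> A_host;
    cusp::io::read_matrix_market_file(A_host, "Q.mtx");

    // Copy to the GPU, converting to the HYB format on assignment;
    // other CUSP formats (COO, CSR, ELL, DIA) can be substituted here.
    cusp::hyb_matrix<int, double, cusp::device_memory> A = A_host;

    // Dense input vector x (all ones) and output vector y on the device.
    cusp::array1d<double, cusp::device_memory> x(A.num_cols, 1.0);
    cusp::array1d<double, cusp::device_memory> y(A.num_rows, 0.0);

    // y = A * x, executed on the GPU.
    cusp::multiply(A, x, y);

    return 0;
}
```

Timing this call for each storage format (and comparing against a CPU baseline) is one way the format comparison described in the abstract could be carried out.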