2012
DOI: 10.1109/tc.2011.112
vCUDA: GPU-Accelerated High-Performance Computing in Virtual Machines

Cited by 211 publications (121 citation statements)
References 20 publications
“…GridCuda [21] supports CUDA 3.2, although it is not publicly available. vCUDA [22] supports CUDA 3.2 and implements an unspecified subset of the CUDA runtime API. The communication protocol between the node that executes the application and the remote GPU incurs considerable overhead from encoding and decoding, which results in a noticeable drop in overall performance.…”
Section: Related Work
confidence: 99%
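The encode/decode overhead this statement attributes to vCUDA's communication protocol can be illustrated with a minimal sketch of API remoting. All names and the wire format below are illustrative assumptions, not vCUDA's actual protocol: each intercepted runtime call is serialized into a message, shipped to the node that owns the GPU, and deserialized there before the real call executes — every call pays the round-trip encoding cost.

```python
import pickle

def encode_call(func_name, args):
    """Serialize an intercepted API call (e.g. cudaMemcpy) into bytes
    that can be shipped to the node hosting the physical GPU."""
    return pickle.dumps({"func": func_name, "args": args})

def decode_call(payload):
    """Reverse step on the GPU node: reconstruct the call before
    dispatching it to the real CUDA runtime."""
    msg = pickle.loads(payload)
    return msg["func"], msg["args"]

# Round trip: this encode + decode pair is the per-call cost the
# statement above identifies as the source of the performance drop.
wire = encode_call("cudaMemcpy", (0x1000, 0x2000, 4096, "HostToDevice"))
name, args = decode_call(wire)
```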
“…For GPU virtualisation, the multiplexing work is performed in user space, owing to the lack of standard interfaces at the hardware level and driver layer. Specifically, the multiplexer and scheduler are placed on top of the CUDA runtime or driver APIs (Gupta et al, 2009; Shi et al, 2012; Giunta et al, 2010). In our case, the coprovisor can perform multiplexing directly at the accelerator driver layer in Dom0.…”
Section: FPGA Virtualisation 2.2.1 Coprovisor
confidence: 99%
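The user-space multiplexing described above — a multiplexer and scheduler sitting on top of the CUDA runtime or driver APIs — can be sketched as follows. The VM identifiers and round-robin policy are illustrative assumptions, not the design of any specific system cited here:

```python
from collections import deque
from itertools import cycle

class UserSpaceMultiplexer:
    """Sketch of a multiplexer above the CUDA runtime/driver API:
    per-VM call queues plus a simple round-robin scheduler that
    dispatches one call at a time to the single physical GPU."""

    def __init__(self, vm_ids):
        self.queues = {vm: deque() for vm in vm_ids}
        self._order = cycle(vm_ids)

    def submit(self, vm, call):
        """A guest VM's intercepted API call enters its queue."""
        self.queues[vm].append(call)

    def schedule(self):
        """Pick the next pending call, round-robin across VMs."""
        for _ in range(len(self.queues)):
            vm = next(self._order)
            if self.queues[vm]:
                return vm, self.queues[vm].popleft()
        return None

mux = UserSpaceMultiplexer(["vm0", "vm1"])
mux.submit("vm0", "cudaLaunchKernel")
mux.submit("vm1", "cudaMemcpy")
first = mux.schedule()
second = mux.schedule()
```

The point of the sketch is placement: everything happens above the driver, which is exactly the layer the coprovisor approach in the quoted statement avoids by multiplexing in Dom0 instead.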
“…FPGAs have been found to outperform GPUs in many specific applications (Che et al, 2008; Cope et al, 2005). Since 2007, many researchers (Dowty and Sugerman, 2009; Gupta et al, 2009; Shi et al, 2012; Giunta et al, 2010; Ravi et al, 2011; Lagar-Cavilla et al, 2007) have focused on making GPUs a shared resource within a virtualised environment, which would allow GPUs to be added at the infrastructure level of cloud computing. But the idea of adding FPGA accelerators to cloud computing (El-Araby et al, 2008; Gonzalez et al, 2012; Huang et al, 2010; Huang and Hsiung, 2013; Lübbers, 2010; Sabeghi and Bertels, 2009; Jain et al, 2014; Byma et al, 2014; Wang et al, 2013) remains at an exploratory stage.…”
Section: Introduction
confidence: 99%
“…One example is vCUDA [16]. In this approach, all API function calls on the target OS must be redirected to the host OS when migration happens.…”
Section: Related Work
confidence: 99%
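The redirection of API calls after migration described above can be sketched as a dispatch table whose backend is swapped when the VM moves. The backend interfaces and names here are hypothetical illustrations, not vCUDA's implementation:

```python
class APIDispatcher:
    """Sketch of post-migration call redirection: the guest keeps
    calling the same entry point, but after migration every call is
    forwarded to the host that now owns the GPU state."""

    def __init__(self, local_backend):
        self.backend = local_backend

    def migrate_to(self, remote_backend):
        # The migration step: subsequent calls go to the new host.
        self.backend = remote_backend

    def call(self, name, *args):
        return self.backend(name, *args)

# Hypothetical backends standing in for the local and remote hosts.
local = lambda name, *a: f"local:{name}"
remote = lambda name, *a: f"remote:{name}"

d = APIDispatcher(local)
before = d.call("cudaMalloc", 4096)   # served on the original host
d.migrate_to(remote)
after = d.call("cudaMalloc", 4096)    # redirected after migration
```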