2009 International Conference on Industrial and Information Systems (ICIIS)
DOI: 10.1109/iciinfs.2009.5429842
Accelerating high performance applications with CUDA and MPI

Cited by 36 publications (20 citation statements). References 2 publications.
“…This is because, with the availability of the needed hardware testbed, the communication plugin component will evolve to support the InfiniBand network: the higher bandwidth is expected to allow remote GPU virtualization frameworks to achieve communication performance similar to that of the PCIe path between the local GPGPU and the remote GPU resource [32], [30]. Due to the unavailability of real-world applications fitting the available ARM cluster, GVirtuS has been tested using ad hoc distributed-memory matrix multiplication software [14] and accelerated CUDA kernels working on a local or x86-remoted high-end GPU device [18].…”
Section: Discussion
confidence: 99%
“…Message Passing Interface (MPI) [2] has been the choice for high-performance computing for more than a decade and has proven its capability to deliver higher performance in parallel applications. CUDA and MPI use different programming approaches, but both depend on the inherent parallelism of the application to be effective.…”
Section: Comparison Study of Parallel Computing with ALU and GPU (CUDA…)
confidence: 99%
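
The hybrid pattern this statement describes can be made concrete with a short sketch: MPI supplies node-level parallelism by giving each rank its own slice of the data, while a CUDA kernel exploits data parallelism within each node. This is a minimal illustration under assumed names and sizes (the scale kernel, the per-rank element count, and the final reduction are illustrative), not code from the cited paper.

// Hybrid MPI + CUDA sketch: one MPI rank per GPU-equipped node.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// GPU-level parallelism: each thread scales one element of this rank's slice.
__global__ void scale(float *x, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;                      // elements per rank (assumed)
    float *h = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = (float)rank;

    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one thread per element of the local slice.
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Node-level parallelism: combine per-rank partial results with MPI.
    float local = h[0], total = 0.0f;
    MPI_Reduce(&local, &total, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of first elements across %d ranks: %f\n", size, total);

    cudaFree(d);
    free(h);
    MPI_Finalize();
    return 0;
}

The split mirrors the statement's point: MPI carries the coarse-grained parallelism across nodes, CUDA the fine-grained parallelism within each node, and neither helps unless the application itself decomposes that way.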
“…CUDA supports standard C language programming and can also support other high-level languages such as Fortran, Java, and Python. The two kinds of programming interfaces provided by CUDA are the device-level programming interface and the language-integrated programming interface [5].…”
Section: CUDA GPU Parallel Architecture
confidence: 99%
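
A minimal sketch of the language-integrated (runtime) interface this statement mentions: the kernel is written in C with the __global__ qualifier and launched through the <<<grid, block>>> syntax extension. The device-level (driver) interface would instead manage contexts and modules explicitly and launch via cuLaunchKernel. The kernel and variable names below are illustrative, not from the cited work.

// Language-integrated (CUDA runtime) interface: kernel + <<<>>> launch.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void add_one(int *v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] += 1;
}

int main(void) {
    const int n = 256;
    int h[256], *d;
    for (int i = 0; i < n; ++i) h[i] = i;

    cudaMalloc((void **)&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);

    add_one<<<1, n>>>(d, n);   // kernel launch as an extension of C syntax

    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);
    printf("v[0]=%d v[%d]=%d\n", h[0], n - 1, h[n - 1]);  // expect 1 and 256
    return 0;
}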