2010
DOI: 10.1007/978-3-642-13217-9_13

OMPCUDA : OpenMP Execution Framework for CUDA Based on Omni OpenMP Compiler

Cited by 13 publications (7 citation statements)
References 3 publications
“…One example is OpenMP, an API with multi-platform support for shared-memory multiprocessing in C, C++, and Fortran. By making use of OpenMP, processing can be carried out in parallel [7].…”
Section: Pendahuluan (Introduction)
unclassified
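To make the cited claim concrete, the following is a minimal sketch of a shared-memory OpenMP loop in C; the array sizes and scaling factor are illustrative and not taken from the cited paper.

#include <omp.h>
#include <stdio.h>

#define N 1000000

static double x[N], y[N];

int main(void) {
    for (int i = 0; i < N; ++i) { x[i] = 1.0; y[i] = 2.0; }

    /* Iterations are divided among the threads of the team;
       x and y live in shared memory, i is private to each thread. */
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        y[i] = 3.0 * x[i] + y[i];

    printf("y[0] = %f (max threads: %d)\n", y[0], omp_get_max_threads());
    return 0;
}

Compiled with an OpenMP-capable compiler (e.g. gcc -fopenmp), the loop runs in parallel across the available cores without any explicit thread management.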
“…Arguments can be declared as IN, OUT, or INOUT to avoid useless transfers, but no piece of data can be kept in GPU memory between two kernel launches. There have also been several initiatives to automate the transformation of OpenMP-annotated source code to CUDA [20,21]. The GPU programming model and the host-accelerator paradigm greatly restrict the potential of this approach, since OpenMP is designed for shared-memory computers.…”
Section: Semi-automatic Approach
mentioning
confidence: 99%
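As a hedged illustration of the OpenMP-to-CUDA mapping that such frameworks automate, the sketch below hand-translates a simple OpenMP loop into CUDA C; the kernel name and launch configuration are assumptions for illustration, not the actual output of OMPCUDA or the other cited translators.

// Original OpenMP loop:
//   #pragma omp parallel for
//   for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// One CUDA thread handles one loop iteration.
__global__ void saxpy_kernel(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    float *x = (float *)malloc(n * sizeof(float));
    float *y = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Data must be copied to the device before the kernel and back
    // afterwards; nothing stays resident on the GPU between two kernel
    // launches, which is the "useless transfers" issue the citation raises.
    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy_kernel<<<blocks, threads>>>(n, 2.0f, d_x, d_y);

    cudaMemcpy(y, d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);  /* expect 4.0 */

    cudaFree(d_x); cudaFree(d_y);
    free(x); free(y);
    return 0;
}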
“…Our proposal is independent of the parallelizing scheme involved, and is applicable to systems that transform OpenMP into CUDA or OpenCL, such as OMPCuda [21] or OpenMP to GPU [20]. It is also relevant for directive-based compilers such as JCuda and hiCUDA [14].…”
Section: Comparison With Respect To a Fully Dynamic Approach
mentioning
confidence: 99%
“…The possibilities for specialization and optimization are increased, but with the same drawback as the proprietary OpenCL extensions, namely code that is specific to one architecture. Initiatives have also been proposed to transform OpenMP-annotated code to CUDA [15,16]. The GPU programming model and the host-accelerator paradigm quickly exposed the limits of this approach, since OpenMP is designed for shared-memory machines.…”
Section: Approche Semi-automatisée (Semi-automated Approach)
unclassified