2011
DOI: 10.1109/ms.2010.134
Joint Forces: From Multithreaded Programming to GPU Computing

Cited by 12 publications (9 citation statements)
References 2 publications
“…This obviously means that SPRINT can run without modification on a variety of cluster configurations and shared memory multi‐processor platforms. Moreover, using MPI allows SPRINT to tackle more than the embarrassingly parallel problems that accelerators such as GPUs are optimized for .…”
Section: Benchmarks
confidence: 99%
“…This obviously means that SPRINT can run without modification on a variety of cluster configurations and shared memory multi-processor platforms. Moreover, using MPI allows SPRINT to tackle more than the embarrassingly parallel problems that accelerators such as GPUs are optimized for [14].…”
Section: The Parallel Version: Pmaxt
confidence: 99%
“…Furthermore, parallel algorithms, as a programming-paradigm concern, have as long a tradition as sequential ones [4] and, although future processor generations promise to ship with hundreds of cores per socket [5], an application can only benefit from this if it is designed for parallel execution.…”
Section: Introduction
confidence: 99%