2011 19th International Euromicro Conference on Parallel, Distributed and Network-Based Processing
DOI: 10.1109/pdp.2011.42
Particle-in-Cell Algorithms for Plasma Simulations on Heterogeneous Architectures

Abstract: During the last two decades, High-Performance Computing (HPC) has grown rapidly in performance by improving single-core processors at the cost of a similar growth in power consumption. The single-core processor improvement has led many scientists to exploit mainly the process level parallelism in their codes. However, the performance of HPC systems is becoming increasingly limited by power consumption and power density, which have become a primary concern for the design of new computer systems. As a result, ne…

Cited by 2 publications (3 citation statements)
References 29 publications
“…Finally, the authors of [16] achieve parallelism by dividing particles among threads according to their positions, on a shared memory machine, while taking advantage of cache reusability. A hybrid approach is presented in [17], in which the author uses MPI for communication between processes and OpenMP to parallelize the loops inside the processes. This way, the implementation takes advantage of the fact that the processing nodes have a multi-core architecture.…”
Section: Introduction (mentioning)
confidence: 99%
“…• Distributed Memory: Parallel processes do not share the memory space, so the only efficient way to move data from the address space of one process to that of another is with message passing (Saez et al., 2011). In the case of this thesis, we consider a mesh-based numerical method.…”
Section: HPC Parallelization (mentioning)
confidence: 99%
“…• Shared Memory: Parallel processes share the memory space, so several processes can operate on the same data (Saez et al., 2011). The parallelization is done by splitting a loop into chunks, each chunk being operated on by a different thread running on a different core.…”
Section: HPC Parallelization (mentioning)
confidence: 99%