2018
DOI: 10.14569/ijacsa.2018.090217

Toward Exascale Computing Systems: An Energy Efficient Massive Parallel Computational Model

Abstract: The emerging exascale supercomputing system, expected by 2020, will help unravel many scientific mysteries. This extreme computing system will achieve a thousand-fold increase in computing power compared to the current petascale computing systems. The forthcoming system will assist system designers and development communities in moving from traditional homogeneous systems to heterogeneous systems that incorporate powerful GPU accelerators alongside traditional CPUs. For achieving ExaFlops (10^18 ca…


Cited by 3 publications (8 citation statements) | References 32 publications
“…To achieve massive parallelism in parallel computing, a Tri-Hierarchy hybrid MOC (MPI + OpenMP + CUDA) model was proposed in 2018 [20]. This model helps achieve massive performance through monolithic parallelism when an HPC application is computed over a large-scale cluster system with multiple nodes and more than two GPUs.…”
Section: F. MOC (MPI + OpenMP + CUDA)
confidence: 99%
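The citing text describes the three-level MOC layering but not its code. Below is a minimal, hypothetical sketch of how the levels can be composed: MPI ranks across cluster nodes, one OpenMP thread per GPU inside a node, and CUDA kernels on each device. The vector-addition kernel, the array size N, and all identifiers are illustrative assumptions, not the workload or code of the cited paper [20].

// Minimal sketch of a tri-hierarchy MPI + OpenMP + CUDA layout (illustrative only).
#include <mpi.h>
#include <omp.h>
#include <cuda_runtime.h>
#include <stdio.h>

// Placeholder CUDA kernel: element-wise vector addition.
__global__ void vec_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                        // level 1: MPI ranks across nodes
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int n_gpus = 0;
    cudaGetDeviceCount(&n_gpus);                   // GPUs visible to this node

    const int N = 1 << 20;                         // elements per GPU (placeholder size)

    #pragma omp parallel num_threads(n_gpus)       // level 2: one OpenMP thread per GPU
    {
        int dev = omp_get_thread_num();
        cudaSetDevice(dev);

        float *a, *b, *c;
        cudaMalloc((void**)&a, N * sizeof(float));
        cudaMalloc((void**)&b, N * sizeof(float));
        cudaMalloc((void**)&c, N * sizeof(float));
        cudaMemset(a, 0, N * sizeof(float));       // placeholder payload
        cudaMemset(b, 0, N * sizeof(float));

        vec_add<<<(N + 255) / 256, 256>>>(a, b, c, N);   // level 3: CUDA kernel on the device
        cudaDeviceSynchronize();

        cudaFree(a); cudaFree(b); cudaFree(c);
    }

    printf("rank %d used %d GPU(s)\n", rank, n_gpus);
    MPI_Finalize();
    return 0;
}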
“…The overall performance of an HPC system is considered the most fundamental and essential metric in massive parallel programming; it is measured as the total number of achieved floating-point operations per second (Flops) [32]. The total achieved Flops (F_T) can be calculated by dividing the number of floating-point operations achieved at the peak performance of the system (F_pp) by the total execution time (T_Exec) [20], which can be written as F_T = F_pp / T_Exec.…”
Section: Performance Measurement
confidence: 99%
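As a purely illustrative check of this metric (the numbers are hypothetical and not taken from the cited papers): if an application completes F_pp = 2 x 10^15 floating-point operations at the system's peak and the measured execution time is T_Exec = 500 s, then F_T = F_pp / T_Exec = (2 x 10^15) / 500 s = 4 x 10^12 Flops, i.e. roughly 4 TFlops.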