2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID)
DOI: 10.1109/ccgrid49817.2020.00-22
CUBE – Towards an Optimal Scaling of Cosmological N-body Simulations

Cited by 10 publications (5 citation statements)
References 15 publications
“…Fast Fourier transform (FFT) is essential in many scientific and engineering applications, including large-scale simulations [6], time series [30], waveform analysis [4], electronic structure calculations [15], and image processing [8]. Due to its wide range of applications, improving the performance of FFT is of great significance.…”
Section: Introduction (mentioning; confidence: 99%)
“…And a noticeable number of scientific applications use half-precision FFT. The gravitational wave data analysis software pyCBC [4] and the cosmological large-scale structure N-body code CUBE [6,32] use half precision to speed up the long-length FFT calculation. Medical image restoration applications [8,19] use lower precision or mixed precision to speed up the computation of batched 2D FFT.…”
Section: Introduction (mentioning; confidence: 99%)
“…Guillet & Teyssier 2011), allows focusing the effort on a specific region of the computational domain, but requires a two-way flow of information between small and large scales. More recently, leading computational cosmology groups have been developing sophisticated schemes to leverage parallel and hybrid computing architectures (Gonnet et al 2013; Theuns et al 2015; Aubert et al 2015; Ocvirk et al 2016; Potter et al 2017; Yu et al 2018; Garrison et al 2019; Cheng et al 2020).…”
Section: Introduction (mentioning; confidence: 99%)
“…To reduce the computational cost of generating simulation samples, various parallel, distributed-memory N-body solvers have been developed, sometimes with GPU acceleration (Springel 2005; Yu, Pen & Wang 2018; Cheng et al 2020). Relying solely on massively parallel computing to tackle next-generation observational data sets appears impractical given our time, memory, and energy resources, since thousands of simulations are needed to produce sufficiently accurate cosmological parameter constraints (see, for instance, Blot et al 2016).…”
(mentioning; confidence: 99%)