Accelerating Multigrid-based Hierarchical Scientific Data Refactoring on GPUs
Preprint, 2020. DOI: 10.48550/arXiv.2007.04457

Abstract: Rapid growth in scientific data and a widening gap between computational speed and I/O bandwidth make it increasingly infeasible to store and share all data produced by scientific simulations. Multigrid-based hierarchical data refactoring is a class of promising approaches to this problem. These approaches decompose data hierarchically; the decomposed components can then be selectively and intelligently stored or shared, based on their relative importance in the original data. Efficient data refactoring desig…
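To make the decomposition idea concrete, below is a minimal, hypothetical CUDA sketch of one ingredient of multigrid-based refactoring: predicting fine-grid nodes from their coarse-grid neighbors and keeping only the residuals as coefficients. It is not the paper's actual algorithm (which also computes corrections via its GPK, LPK, and IPK kernels); the grid size, the linear-interpolation predictor, and the kernel name are illustrative assumptions.

```cuda
// Sketch only: one-dimensional, node-centered grid of size 2^k + 1.
// At each level, odd-indexed (fine) nodes are predicted by linear
// interpolation from their even-indexed (coarse) neighbors, and the node
// value is replaced by the residual (the "coefficient").
#include <cstdio>
#include <cuda_runtime.h>

__global__ void compute_coefficients(double *v, int n, int stride) {
    // Each thread owns one fine node at index (2*i + 1) * stride.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int idx = (2 * i + 1) * stride;
    if (idx < n - 1) {
        double predicted = 0.5 * (v[idx - stride] + v[idx + stride]);
        v[idx] -= predicted;  // overwrite the fine node with its coefficient
    }
}

int main() {
    const int n = 17;  // 2^4 + 1 grid points
    double h_v[n];
    for (int i = 0; i < n; ++i) h_v[i] = (double)(i * i);  // smooth sample data

    double *d_v;
    cudaMalloc((void **)&d_v, n * sizeof(double));
    cudaMemcpy(d_v, h_v, n * sizeof(double), cudaMemcpyHostToDevice);

    // Decompose level by level (stride 1, 2, 4, ...). Even multiples of the
    // stride are never modified, so coarser levels always read original values.
    for (int stride = 1; stride < n - 1; stride *= 2) {
        int fine_nodes = (n - 1) / (2 * stride);
        compute_coefficients<<<(fine_nodes + 255) / 256, 256>>>(d_v, n, stride);
    }

    cudaMemcpy(h_v, d_v, n * sizeof(double), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%d: %g\n", i, h_v[i]);
    cudaFree(d_v);
    return 0;
}
```

After the loop, only the two endpoints still hold original values; every other entry is a small correction coefficient, which is what makes selective storage or progressive sharing possible. Recomposition reverses the loop, coarsest level first, adding each prediction back.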

Cited by 2 publications (4 citation statements, all from 2021). References 39 publications.

Citation statements:
“…• OPT: Our GPU data refactoring, which uses our novel processing kernels (i.e., GPK, LPK, and IPK) proposed in our previous work [22].…”
Section: Evaluation Methodology (mentioning; confidence: 99%)
“…In the state-of-the-art design [15], the computed coefficients are first copied to the workspace before they are used for computing corrections, which prohibits out-of-place computation unless additional memory space is used. Our previous work leverages kernel fusion [22] to merge the copy of the coefficients with the first mass-trans matrix multiplication, enabling LPK to compute out-of-place without a significant increase in memory footprint. However, one drawback of our previous design is degraded memory access efficiency for GPK, since accessing nodes in coarser grids leads to larger strided memory accesses.…”
Section: Heuristic Performance Auto Tuning (mentioning; confidence: 99%)
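The kernel-fusion idea quoted above can be illustrated with a small, hypothetical CUDA sketch; it is not the paper's actual LPK, and the tridiagonal (1, 4, 1)/6 stencil below merely stands in for the mass-trans matrix multiplication. The unfused baseline needs one kernel launch to copy coefficients into the workspace and a second to multiply; the fused kernel reads the coefficients directly, writes the workspace entry, and produces the product in a single launch, saving one full pass over memory.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Unfused baseline, step 1: copy coefficients into the workspace.
__global__ void copy_coeff(const double *coeff, double *work, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) work[i] = coeff[i];
}

// Unfused baseline, step 2: tridiagonal "mass matrix" product from the workspace.
__global__ void mass_multiply(const double *work, double *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        double left  = (i > 0)     ? work[i - 1] : 0.0;
        double right = (i < n - 1) ? work[i + 1] : 0.0;
        out[i] = (left + 4.0 * work[i] + right) / 6.0;
    }
}

// Fused kernel: the copy and the first multiplication share one pass.
// 'work' is only written here; later stages would read it.
__global__ void fused_copy_mass_multiply(const double *coeff, double *work,
                                         double *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        double left  = (i > 0)     ? coeff[i - 1] : 0.0;
        double mid   = coeff[i];
        double right = (i < n - 1) ? coeff[i + 1] : 0.0;
        work[i] = mid;
        out[i]  = (left + 4.0 * mid + right) / 6.0;
    }
}

int main() {
    const int n = 16;
    double h_coeff[n], h_out[n];
    for (int i = 0; i < n; ++i) h_coeff[i] = 1.0 + i;

    double *d_coeff, *d_work, *d_out;
    cudaMalloc((void **)&d_coeff, n * sizeof(double));
    cudaMalloc((void **)&d_work,  n * sizeof(double));
    cudaMalloc((void **)&d_out,   n * sizeof(double));
    cudaMemcpy(d_coeff, h_coeff, n * sizeof(double), cudaMemcpyHostToDevice);

    // One launch instead of two: the copy is folded into the multiplication.
    fused_copy_mass_multiply<<<1, 256>>>(d_coeff, d_work, d_out, n);

    cudaMemcpy(h_out, d_out, n * sizeof(double), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%g\n", h_out[i]);
    cudaFree(d_coeff); cudaFree(d_work); cudaFree(d_out);
    return 0;
}
```

As the quote above notes, the remaining drawback is unrelated to the fusion itself: on coarser grids the nodes being gathered sit farther apart, so memory accesses become more strided and GPK's access efficiency degrades.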