Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2021
DOI: 10.1145/3437801.3441612
Are dynamic memory managers on GPUs slow?

Cited by 18 publications (13 citation statements)
References 15 publications
“…In addition, there is also a strong movement toward developing system software on GPUs, such as database management systems [27, 37, 43-45]. Dynamic memory allocation on GPUs was first introduced about ten years ago by NVIDIA, and many other solutions have been proposed since then [42]. Many GPU-based applications benefit from dynamic memory allocation, such as graph analytics [9, 41], data analytics [5, 35], and databases [4, 19].…”
Section: Introduction
confidence: 99%
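(For context, the device-side allocator that NVIDIA introduced exposes malloc and free callable from kernel code on devices of compute capability 2.0 and later. The sketch below shows the typical usage pattern with illustrative kernel names and sizes; it is not taken from the paper or from any of the citing works.)

#include <cuda_runtime.h>

// Each thread allocates a small scratch buffer from the device heap,
// fills it, and frees it again, exercising the built-in device-side
// malloc/free that CUDA provides on compute capability 2.0+.
__global__ void per_thread_alloc(int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    int *buf = static_cast<int *>(malloc(16 * sizeof(int)));  // device-side malloc
    if (buf == nullptr) return;   // allocation fails if the device heap is exhausted
    for (int i = 0; i < 16; ++i) buf[i] = tid + i;
    free(buf);                    // device-side free
}

int main()
{
    // The device heap is small by default; enlarge it before launching.
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 64 * 1024 * 1024);
    per_thread_alloc<<<256, 256>>>(256 * 256);
    cudaDeviceSynchronize();
    return 0;
}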
“…There are unique challenges in developing system software on massively parallel hardware, mostly imposed by the need to support a large number of parallel threads efficiently and the architectural complexity of the GPU hardware. Dynamic memory allocators in particular face challenges such as thread contention and synchronization overhead, and multiple studies [42] have proposed solutions to address these challenges. Similar to traditional memory allocators, such solutions utilize a shared data structure to keep track of available memory units [42].…”
Section: Introduction
confidence: 99%
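(The "shared data structure to keep track of available memory units" can be made concrete with a deliberately naive sketch: a single atomic counter over a preallocated arena that every thread bumps. The names PoolAllocator, arena, and offset are hypothetical, and this is not one of the allocators surveyed in [42]; those use richer shared structures such as bitmaps, free lists, and per-SM caches precisely to avoid serializing on one counter.)

#include <cuda_runtime.h>

// Naive pool allocator: all threads share one counter over a preallocated arena.
// The single atomicAdd below is exactly the kind of contention point that more
// sophisticated GPU allocators try to avoid.
struct PoolAllocator {
    char *arena;                  // preallocated device memory
    size_t capacity;              // total bytes in the arena
    unsigned long long *offset;   // next free byte, advanced atomically (lives in global memory)

    __device__ void *alloc(size_t bytes)
    {
        unsigned long long old = atomicAdd(offset, (unsigned long long)bytes);
        return (old + bytes <= capacity) ? arena + old : nullptr;  // nullptr on exhaustion
    }
};

__global__ void kernel(PoolAllocator pool)
{
    // Every thread contends on the same shared counter.
    int *p = static_cast<int *>(pool.alloc(4 * sizeof(int)));
    if (p) p[0] = threadIdx.x;
}

int main()
{
    PoolAllocator pool;
    pool.capacity = 1 << 26;                                   // 64 MiB arena
    cudaMalloc(&pool.arena, pool.capacity);
    cudaMalloc(&pool.offset, sizeof(unsigned long long));
    cudaMemset(pool.offset, 0, sizeof(unsigned long long));
    kernel<<<128, 256>>>(pool);
    cudaDeviceSynchronize();
    return 0;
}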
“…This necessitates a more sophisticated task-to-hardware mapping for GPUs than for CPUs. • Input Awareness: Dynamic memory allocation is expensive on GPUs [112], and we can avoid it if we can estimate the worst-case memory usage. This can be done by using meta-information such as the maximum degree of the input graph.…”
Section: Introduction
confidence: 99%
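(A minimal sketch of this input-aware preallocation, assuming a CSR graph with row_ptr/col_idx arrays: the host derives the worst-case bound from the maximum degree and allocates one fixed-size slot per vertex, so the kernel never calls a device-side allocator. All identifiers here, such as expand_frontier and preallocate, are hypothetical.)

#include <algorithm>
#include <vector>
#include <cuda_runtime.h>

// Each vertex writes its neighbor list into a preallocated slot of max_degree
// entries; no device-side allocation happens inside the kernel.
__global__ void expand_frontier(const int *row_ptr, const int *col_idx,
                                int num_vertices, int max_degree, int *out)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= num_vertices) return;

    int *slot = out + (size_t)v * max_degree;   // this vertex's preallocated slot
    int deg = row_ptr[v + 1] - row_ptr[v];
    for (int i = 0; i < deg; ++i)
        slot[i] = col_idx[row_ptr[v] + i];
}

// Host side: derive the worst-case bound from graph metadata, then allocate once.
int *preallocate(const std::vector<int> &row_ptr, int num_vertices)
{
    int max_degree = 0;
    for (int v = 0; v < num_vertices; ++v)
        max_degree = std::max(max_degree, row_ptr[v + 1] - row_ptr[v]);

    int *out = nullptr;
    cudaMalloc(&out, (size_t)num_vertices * max_degree * sizeof(int));
    return out;
}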