Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture 2017
DOI: 10.1145/3123939.3123975
Mosaic

Cited by 80 publications (8 citation statements)
References 73 publications
“…We also added TLB and MMU support to simulate unified memory. A two-level TLB design is used where each SM has its private L1 TLB and all SMs share an L2 TLB as in prior work [8,80,81,9,91]. A TLB miss triggers a page table walk upon a page fault; up to 64 concurrent page walkers are supported.…”
Section: Methods
confidence: 99%
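The two-level TLB organization the statement above describes (a private L1 TLB per SM, a shared L2 TLB, and a bounded pool of page walkers) can be sketched as a small simulation. This is a hedged illustration, not the cited simulator's code: the class names, capacities, and the LRU replacement policy are all assumptions; only the structure (per-SM L1, shared L2, up to 64 concurrent walkers) comes from the statement.

```python
class TLB:
    """A simple set of translations with LRU replacement (assumed policy)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # virtual page number -> physical frame number
        self.order = []     # LRU order, oldest first

    def lookup(self, vpn):
        if vpn in self.entries:
            self.order.remove(vpn)
            self.order.append(vpn)  # refresh LRU position
            return self.entries[vpn]
        return None

    def insert(self, vpn, pfn):
        if len(self.entries) >= self.capacity:
            victim = self.order.pop(0)  # evict least recently used
            del self.entries[victim]
        self.entries[vpn] = pfn
        self.order.append(vpn)


class GPUMMU:
    MAX_WALKERS = 64  # up to 64 concurrent page walkers, per the statement

    def __init__(self, num_sms, l1_size=64, l2_size=512):
        self.l1_tlbs = [TLB(l1_size) for _ in range(num_sms)]  # private per SM
        self.l2_tlb = TLB(l2_size)                             # shared by all SMs
        self.page_table = {}
        self.active_walkers = 0

    def translate(self, sm_id, vpn):
        pfn = self.l1_tlbs[sm_id].lookup(vpn)
        if pfn is not None:
            return pfn, "L1 hit"
        pfn = self.l2_tlb.lookup(vpn)
        if pfn is not None:
            self.l1_tlbs[sm_id].insert(vpn, pfn)
            return pfn, "L2 hit"
        # Miss in both TLB levels: trigger a page table walk.
        if self.active_walkers >= self.MAX_WALKERS:
            return None, "stall: all walkers busy"
        self.active_walkers += 1
        pfn = self.page_table.get(vpn)  # unmapped page -> page fault
        self.active_walkers -= 1
        if pfn is None:
            return None, "page fault"
        self.l2_tlb.insert(vpn, pfn)
        self.l1_tlbs[sm_id].insert(vpn, pfn)
        return pfn, "walk"
```

A first access from one SM fills both levels, so a later access from a different SM hits in the shared L2 rather than walking the page table again.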
“…GPU Memory Management. Many memory management optimizations have been proposed to pre-allocate most GPU memory and then manage the memory themselves, including paging [4], replacement caching [52] and memory pool [56]. Mosaic [4] provided application-transparent support for multiple page sizes to page-in and page-out.…”
Section: Related Work
confidence: 99%
“…Many memory management optimizations have been proposed to pre-allocate most GPU memory and then manage the memory themselves, including paging [4], replacement caching [52] and memory pool [56]. Mosaic [4] provided application-transparent support for multiple page sizes to page-in and page-out. MultiQx-GPU [52] designed a cost-driven replacement policy for efficient executions of concurrent queries in GPU databases.…”
Section: Related Work
confidence: 99%
“…This gap (7, the green part in Figure 4) approximates the potential benefit of a huge page split. If the potential benefit is large, Memtis chooses huge pages with high access skew in their subpages as split candidates (8). Then, it splinters the huge pages in the background and places each split subpage into the appropriate memory tier by referring to subpage access information maintained in the huge page (9).…”
Section: Memtis Design Overview
confidence: 99%
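The split policy in the statement above has three steps: estimate the benefit of splitting, pick huge pages whose subpage accesses are highly skewed, and place each split subpage into a tier by its recorded access count. A minimal sketch of that decision logic, with thresholds, function names, and the two-tier labels all assumed for illustration (they are not from the Memtis paper):

```python
def access_skew(subpage_counts):
    """Fraction of a huge page's accesses concentrated in its hottest subpage."""
    total = sum(subpage_counts)
    if total == 0:
        return 0.0
    return max(subpage_counts) / total


def pick_split_candidates(huge_pages, benefit,
                          benefit_threshold=0.1, skew_threshold=0.5):
    """If the estimated split benefit is large enough, return the huge pages
    whose subpage accesses are skewed enough to be worth splitting."""
    if benefit < benefit_threshold:
        return []  # splitting would not pay off; keep huge pages intact
    return [hp for hp, counts in huge_pages.items()
            if access_skew(counts) >= skew_threshold]


def place_subpages(subpage_counts, hot_threshold=10):
    """Assign each split subpage to a memory tier from its access count."""
    return ["fast" if c >= hot_threshold else "slow" for c in subpage_counts]
```

A huge page where nearly all accesses land on one subpage is selected and, once split, only its hot subpage is placed in the fast tier; a huge page with uniform accesses is left alone.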