2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid)
DOI: 10.1109/ccgrid.2016.91
OS-Based NUMA Optimization: Tackling the Case of Truly Multi-thread Applications with Non-partitioned Virtual Page Accesses

Abstract: A common approach to improve memory access in NUMA machines exploits operating system (OS) page protection mechanisms to induce faults to determine which pages are accessed by what thread, so as to move the thread and its working-set of pages to the same NUMA node. However, existing proposals do not fully fit the requirements of truly multi-thread applications with non-partitioned accesses to virtual pages. In fact, these proposals exploit (induced) faults on a same page-table for all the threads of a same pro…
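The fault-driven access tracking the abstract describes can be illustrated with a minimal user-space sketch: revoke a region's permissions with mprotect(), catch the resulting SIGSEGV to record which page (and, in a full profiler, which thread) performed the access, then restore access so execution continues. This is only an illustration of the general mechanism under assumed region sizes and counters, not the paper's design; calling mprotect() from a signal handler is not strictly async-signal-safe and is tolerated here purely for brevity.

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 4

static char *region;
static long  page_size;
static int   access_count[NPAGES];    /* per-page fault counter */

static void fault_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    uintptr_t addr = (uintptr_t)info->si_addr;
    uintptr_t base = (uintptr_t)region;
    size_t idx = (addr - base) / (size_t)page_size;

    if (idx < NPAGES)
        access_count[idx]++;           /* record which page was touched */

    /* Re-enable the page so the faulting instruction can be replayed.
     * (mprotect() in a handler is not async-signal-safe; illustration only.) */
    mprotect((void *)(addr & ~((uintptr_t)page_size - 1)),
             page_size, PROT_READ | PROT_WRITE);
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);
    region = mmap(NULL, NPAGES * page_size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = fault_handler;
    sigaction(SIGSEGV, &sa, NULL);

    /* Revoke access: the next touch of each page raises a trackable fault. */
    mprotect(region, NPAGES * page_size, PROT_NONE);

    region[0] = 1;                     /* touch page 0 */
    region[2 * page_size] = 1;         /* touch page 2 */

    for (int i = 0; i < NPAGES; i++)
        printf("page %d: %d tracked access(es)\n", i, access_count[i]);

    munmap(region, NPAGES * page_size);
    return 0;
}
```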

Cited by 14 publications (17 citation statements)
References 25 publications
“…Due to resource contention and the asymmetric access time for cores to access memory on remote nodes and the local node, researchers have developed various strategies for dealing with these issues to support NUMA systems. Some focus on the memory placement and data mapping policies that try to allocate memory or migrate pages for a process on the node for improving locality or balancing of memory access. To minimize remote memory access, memory access behaviors are detected, and pages are placed on the node with the most accesses to them.…”
Section: Technology Background and Related Work
confidence: 99%
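The memory placement policies mentioned in the statement above, which allocate a process's memory on a chosen node to improve locality, can be sketched with libnuma. This is a minimal illustration assuming libnuma is installed (link with -lnuma); the target node and buffer size are hypothetical, not a policy from the cited work.

```c
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return EXIT_FAILURE;
    }

    int node = 0;                      /* hypothetical target node */
    size_t len = 1 << 20;              /* 1 MiB working set */

    /* Request memory backed by pages on the chosen node. */
    char *buf = numa_alloc_onnode(len, node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return EXIT_FAILURE;
    }

    for (size_t i = 0; i < len; i++)   /* touch pages so they are faulted in */
        buf[i] = (char)i;

    printf("allocated %zu bytes on node %d\n", len, node);
    numa_free(buf, len);
    return EXIT_SUCCESS;
}
```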
“…To minimize remote memory access, memory access behaviors are detected, and pages are placed on the node with the most accesses to them. Some focus on sharing‐aware mapping, which maps threads accessing shared data to cores close to one another in the memory hierarchy and maps the data they accessed to their NUMA nodes. Some detect the memory access pattern for thread and data mapping on the hardware level, whereas some studies gather information from page faults or using a memory tracer tool.…”
Section: Technology Background and Related Work
confidence: 99%
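Placing a page on the node with the most accesses, as the statement above describes, can be sketched with Linux's move_pages(2) (declared in <numaif.h>, link with -lnuma). The choice of destination node here is an assumption standing in for an access-counting policy that is not shown; the sketch only demonstrates the migration step.

```c
#include <numaif.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);
    char *buf = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    memset(buf, 0, page_size);         /* fault the page in somewhere */

    void *pages[1]  = { buf };
    int   nodes[1]  = { 0 };           /* hypothetical "most accesses" node */
    int   status[1] = { -1 };

    /* pid 0 = calling process; MPOL_MF_MOVE moves only this process's pages. */
    if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) != 0)
        perror("move_pages");
    else
        printf("page now on node %d\n", status[0]);

    munmap(buf, page_size);
    return 0;
}
```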