2014 30th Symposium on Mass Storage Systems and Technologies (MSST)
DOI: 10.1109/msst.2014.6855538
Jericho: Achieving scalability through optimal data placement on multicore systems

Cited by 6 publications (3 citation statements)
References 7 publications
“…Indirect LLC Isolation: FRIMICS does not explicitly isolate CPU caches (specifically the LLC) and memory bandwidth; however, it mitigates interference in those resources by placing applications on a single NUMA node when enough resources are available, or on neighboring NUMA nodes when applications are larger. Therefore, it indirectly reduces interference in per-core caches and directly reduces traffic across different NUMA nodes, which can significantly improve I/O throughput, by up to 2× [30].…”
Section: FRIMICS
confidence: 99%
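The placement policy quoted above can be sketched as a small allocator: an application is placed on a single NUMA node when enough cores are free there, and spills to a neighboring node only when it is larger. This is a minimal illustration, not the FRIMICS implementation; the topology and core counts are hypothetical.

```python
# Hedged sketch of the NUMA-aware placement policy described above.
# Node ids, free-core counts, and the neighbor ordering are assumptions.

def place(app_cores, free_cores, neighbors):
    """Return a list of (node, cores) assignments, or None if it cannot fit.

    app_cores  -- number of cores the application needs
    free_cores -- dict: node id -> free core count (mutated on success)
    neighbors  -- dict: node id -> ordered list of neighboring node ids
    """
    # Prefer any single node that can host the whole application.
    for node, free in free_cores.items():
        if free >= app_cores:
            free_cores[node] -= app_cores
            return [(node, app_cores)]
    # Otherwise start at the node with the most free cores and spill
    # to its neighbors, keeping the placement as compact as possible.
    start = max(free_cores, key=free_cores.get)
    assignment, needed = [], app_cores
    for node in [start] + neighbors[start]:
        take = min(free_cores[node], needed)
        if take > 0:
            free_cores[node] -= take
            assignment.append((node, take))
            needed -= take
        if needed == 0:
            return assignment
    return None  # not enough capacity anywhere

free = {0: 8, 1: 8}          # two NUMA nodes, 8 free cores each
nbrs = {0: [1], 1: [0]}
print(place(4, free, nbrs))  # fits on node 0 -> [(0, 4)]
print(place(10, free, nbrs)) # larger than one node -> spills across two
```

Keeping an application on one node avoids remote-memory accesses entirely; the spill path trades some cross-node traffic for admitting larger applications, which matches the behavior the citing paper attributes to this approach.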
“…In [16], using Vanguard, we show the improvement in memory throughput from applying task placement that minimizes NUMA effects in the system.…”
Section: Journaling Filesystem
confidence: 99%
“…pCache [16] provides a partitioned DRAM I/O cache to reduce memory contention due to global policies and to allow NUMA placement. In typical servers today all workloads use the shared Linux page cache.…”
Section: DRAM I/O Cache
confidence: 99%
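The idea of a partitioned I/O cache, as opposed to a single global shared page cache, can be illustrated with per-node LRU partitions whose lookups and evictions touch only node-local state. This is a toy sketch, not pCache's design; the partition sizes and key names are hypothetical.

```python
from collections import OrderedDict

# Hedged sketch of a partitioned DRAM I/O cache: each NUMA node owns a
# private LRU partition, so eviction on one node never disturbs another.
# Capacities and the caller-supplied node choice are assumptions.

class PartitionedCache:
    def __init__(self, nodes, capacity_per_node):
        self.parts = {n: OrderedDict() for n in nodes}
        self.capacity = capacity_per_node

    def get(self, node, key):
        part = self.parts[node]
        if key in part:
            part.move_to_end(key)      # refresh LRU position
            return part[key]
        return None                    # miss in this node's partition

    def put(self, node, key, value):
        part = self.parts[node]
        part[key] = value
        part.move_to_end(key)
        if len(part) > self.capacity:
            part.popitem(last=False)   # evict node-local LRU victim only

cache = PartitionedCache(nodes=[0, 1], capacity_per_node=2)
cache.put(0, "blk:1", b"a")
cache.put(0, "blk:2", b"b")
cache.put(0, "blk:3", b"c")       # evicts blk:1 from node 0 only
cache.put(1, "blk:9", b"z")       # node 1's partition is unaffected
print(cache.get(0, "blk:1"))      # None (evicted)
print(cache.get(1, "blk:9"))      # b'z'
```

Because each partition has its own metadata and eviction list, threads on different nodes never contend on a global lock or a global replacement policy, which is the contention the citing paper says a shared Linux page cache incurs.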