Data tiering in heterogeneous memory systems
Proceedings of the Eleventh European Conference on Computer Systems (EuroSys 2016)
DOI: 10.1145/2901318.2901344

Cited by 156 publications (85 citation statements); references 30 publications.
“…Managing hybrid memory or storage: Many generic systems manage hybrid memory and storage. X-mem automatically places application data based on application execution patterns [19]. Thermostat transparently migrates memory pages between DRAM and NVM while considering page granularity and performance [3].…”
Section: Related Work (mentioning)
confidence: 99%
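The statement above describes page-granularity migration between DRAM and NVM, as in Thermostat. Purely as an illustration (not code from X-Mem or Thermostat), the following minimal C sketch moves a single page of the calling process between two NUMA nodes with Linux's move_pages(); the node numbering, and the assumption that NVM is exposed as a separate NUMA node, are illustrative choices rather than details from the cited papers.

/*
 * Minimal sketch of hotness-driven page migration between a DRAM tier
 * and an NVM tier.  Assumptions: DRAM is NUMA node 0, NVM is NUMA node
 * 1, and the hot/cold decision comes from some external profiler.
 * Build: gcc tier.c -lnuma
 */
#include <numaif.h>     /* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define DRAM_NODE 0     /* assumed node ID of the fast tier */
#define NVM_NODE  1     /* assumed node ID of the slow tier */

/* Migrate one page of the calling process to the tier matching its hotness. */
static int place_page(void *page_addr, int is_hot)
{
    void *pages[1]  = { page_addr };
    int   nodes[1]  = { is_hot ? DRAM_NODE : NVM_NODE };
    int   status[1] = { -1 };

    /* The kernel physically moves the page; status[0] reports the node
     * it landed on, or a negative errno value on failure. */
    if (move_pages(0 /* self */, 1, pages, nodes, status, MPOL_MF_MOVE) != 0) {
        perror("move_pages");
        return -1;
    }
    return status[0];
}

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);
    char *buf = aligned_alloc(page_size, page_size);
    if (buf == NULL)
        return 1;
    buf[0] = 1;                       /* touch so the page is backed */

    /* Pretend the profiler classified this page as hot: pull it to DRAM. */
    printf("page now on node %d\n", place_page(buf, 1));

    free(buf);
    return 0;
}

Thermostat-style systems make this decision transparently inside the kernel and reason about page granularity; the sketch only shows the user-level migration primitive that such a policy ultimately relies on.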
“…The results show that although the average response time of I/O accesses is decreased, the migration algorithm needs to be adapted, as the migration overhead is considerably smaller between DIMMs and SSDs than between SSDs and HDDs. A different approach is to determine automatically whether a given memory area can benefit from DRAM characteristics by using a profiling tool [18]. Furthermore, NOVA is a file system that aims to increase performance while offering strong consistency guarantees, using additional metadata structures held in DRAM to accelerate lookups.…”
Section: Accelerators (mentioning)
confidence: 99%
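The profiling-driven approach mentioned in the statement above implicitly weighs migration overhead against the expected benefit of DRAM placement. A back-of-the-envelope version of that check could look like the C sketch below; the linear cost model and all parameter names are illustrative assumptions, not figures from the cited work.

/*
 * Sketch of a migration benefit check for a profiling-driven tiering
 * policy: migrate only if the expected latency savings over the next
 * epoch exceed the one-time cost of copying the region.
 */
#include <stdbool.h>
#include <stddef.h>

static bool migration_pays_off(size_t region_bytes,
                               double accesses_per_epoch,
                               double slow_tier_ns,       /* e.g. NVM/SSD read latency */
                               double fast_tier_ns,       /* e.g. DRAM read latency */
                               double migrate_ns_per_byte)
{
    /* Expected latency saved over the next epoch if the region stays hot... */
    double savings = accesses_per_epoch * (slow_tier_ns - fast_tier_ns);
    /* ...versus the one-time cost of copying it to the faster tier. */
    double cost = (double)region_bytes * migrate_ns_per_byte;
    return savings > cost;
}

In such a model, the quoted observation that migration overhead is far smaller between DIMMs and SSDs than between SSDs and HDDs translates into a much lower break-even access count, which is why the migration algorithm needs to be re-tuned for that tier pair.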
“…Other researchers have proposed to mitigate long SCM latencies by using conventional planar DRAM DIMMs for hardware-managed caches [59], OS-based page migration [29] and application-assisted data placement [20]. Applying these designs in the context of server workloads will expose the lack of internal parallelism in planar DRAM devices [72], leading to excess request queuing and therefore inflated latencies.…”
Section: Related Work (mentioning)
confidence: 99%
“…DRAM and SCM flat integration. While we considered a two-level hierarchy with a hardware-managed DRAM cache as our baseline (§3), a number of prior proposals consider an alternative memory system organization: flat integration of SCM and DRAM on a shared memory bus [1,20,29]. In these proposals, software is responsible for placing the data on the heterogeneous DIMMs, relying on heuristics to optimize for performance [1,20] or energy efficiency [29].…”
Section: Related Work (mentioning)
confidence: 99%
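With flat integration of SCM and DRAM on a shared memory bus, software decides on which kind of DIMM each region lives. Assuming the SCM DIMMs are exposed as a distinct NUMA node (an assumption about the platform, not something stated in the citation), a placement heuristic could steer a fresh mapping with mbind(), as in this C sketch.

/*
 * Sketch of software-managed placement on a flat DRAM + SCM memory bus:
 * the application (or runtime) binds a mapping to the preferred kind of
 * DIMM before first touch.  SCM appearing as NUMA node 1 is assumed.
 * Build: gcc place.c -lnuma
 */
#include <numaif.h>     /* mbind(), MPOL_BIND */
#include <sys/mman.h>
#include <stdio.h>

#define SCM_NODE 1      /* assumed node ID of the SCM DIMMs */

int main(void)
{
    size_t len = 64UL * 1024 * 1024;
    void *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* A capacity- or energy-oriented heuristic steers cold data to SCM;
     * a performance-oriented one would bind hot structures to DRAM instead. */
    unsigned long nodemask = 1UL << SCM_NODE;
    if (mbind(region, len, MPOL_BIND, &nodemask,
              sizeof(nodemask) * 8, 0) != 0) {
        perror("mbind");
        return 1;
    }
    return 0;
}

The binding is purely a placement hint to the kernel's allocator; whether it is driven by performance or energy heuristics is the policy question the quoted proposals [1,20,29] explore.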