Emerging next-generation nonvolatile memories (NVRAMs), such as Phase-change RAM (PRAM), Ferroelectric RAM (FRAM), and Magnetic RAM (MRAM), are rapidly becoming promising candidates for large-scale main memory because of their high density and low power consumption. Many researchers have attempted to build hybrid main memories that pair NVRAMs with DRAM in order to make up for the limits of NVRAMs. However, we find that existing page caching algorithms, such as LRU, LIRS, and CLOCK-Pro, are often sub-optimal for NVRAMs because of their DRAM-oriented design, which assumes uniform access latency and unlimited endurance. Consequently, these algorithms cannot be directly applied to a hybrid main memory architecture with PRAM. To mitigate this limitation, we propose a new page caching algorithm for the hybrid main memory, designed to overcome the long latency and low endurance of PRAM. On the basis of the LRU replacement algorithm, we propose page access pattern prediction and migration schemes that keep write-bound pages in DRAM. The experimental results show that our page caching algorithm minimizes the number of write accesses to PRAM while maintaining the cache hit ratio: the total write access count is reduced by up to 52.9% and the consumed energy by 19.9%. Therefore, we can enhance average page cache performance and mitigate the endurance problem in the hybrid main memory.
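As an illustration of the kind of policy this abstract describes, the following Python sketch implements an LRU page cache over DRAM and PRAM frames that migrates write-heavy pages from PRAM into DRAM. The class name, the WRITE_HOT_THRESHOLD parameter, and the placement rules are our own assumptions for illustration, not the authors' exact algorithm.

```python
from collections import OrderedDict

# Hypothetical sketch: an LRU page cache over a DRAM+PRAM hybrid that
# migrates write-heavy pages into DRAM.  Threshold and structure are
# illustrative assumptions, not the paper's exact algorithm.
WRITE_HOT_THRESHOLD = 4   # writes observed before a PRAM page is migrated

class HybridPageCache:
    def __init__(self, dram_frames, pram_frames):
        self.capacity = {'DRAM': dram_frames, 'PRAM': pram_frames}
        self.used = {'DRAM': 0, 'PRAM': 0}
        self.lru = OrderedDict()          # page -> {'loc': 'DRAM'|'PRAM', 'writes': int}
        self.pram_writes = 0              # statistic the policy tries to minimize

    def access(self, page, is_write):
        if page in self.lru:
            self.lru.move_to_end(page)    # LRU update on a hit
            meta = self.lru[page]
            if is_write:
                meta['writes'] += 1
                if meta['loc'] == 'PRAM':
                    self.pram_writes += 1
                    if meta['writes'] >= WRITE_HOT_THRESHOLD:
                        self._migrate_to_dram(page, meta)
            return True                   # hit
        self._insert(page, is_write)
        return False                      # miss

    def _insert(self, page, is_write):
        # Write-initiated pages go to DRAM when a free frame exists; others to PRAM.
        loc = 'DRAM' if is_write and self.used['DRAM'] < self.capacity['DRAM'] else 'PRAM'
        if self.used[loc] >= self.capacity[loc]:
            self._evict(loc)
        self.lru[page] = {'loc': loc, 'writes': 1 if is_write else 0}
        self.used[loc] += 1
        if loc == 'PRAM' and is_write:
            self.pram_writes += 1

    def _evict(self, loc):
        # Evict the least recently used page residing in `loc`.
        for victim, meta in self.lru.items():
            if meta['loc'] == loc:
                del self.lru[victim]
                self.used[loc] -= 1
                return

    def _migrate_to_dram(self, page, meta):
        if self.used['DRAM'] >= self.capacity['DRAM']:
            self._evict('DRAM')
        self.used['PRAM'] -= 1
        self.used['DRAM'] += 1
        meta['loc'] = 'DRAM'
        meta['writes'] = 0
```

A driver would call access(page, is_write) for every page cache reference and read pram_writes afterwards to compare policies against a plain LRU baseline.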
The advent of manycore computing architectures aggravates energy consumption and the memory wall problem. Emerging technologies such as on-chip DRAM and nonvolatile memory (NVRAM) have received attention as promising solutions. Nonvolatile memory is a viable DRAM replacement, achieving competitive performance at lower power consumption, while on-chip DRAM extends memory bandwidth. The confluence of these trends offers a new opportunity to rethink traditional computing systems and memory hierarchies. In an attempt to mitigate the energy and memory wall problems, we propose MN-MATE, a novel architecture and a set of management techniques for allocating a number of cores, on-chip DRAM, and large amounts of off-chip DRAM and NVRAM. In MN-MATE, each guest OS utilizes the cores and the various memories allocated by the hypervisor. Based on its knowledge of the allocated resources, a guest OS co-schedules tasks that access different types of memory with complementary access intensity. The memory management system of the OS utilizes on-chip DRAM as a low-latency part of main memory and selects the proper location of data among the three types of memory based on the data's access characteristics. Preliminary experimental results show that these techniques, combined with the new architecture, improve system performance and reduce energy consumption.
Index Terms: virtual machine; resource management; scheduling; NVRAM; hybrid main memory
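The data-placement idea, choosing among on-chip DRAM, off-chip DRAM, and NVRAM based on a page's access characteristics, can be pictured as a simple tier-selection rule. The choose_tier function, its thresholds, and the scoring policy below are illustrative assumptions in the spirit of MN-MATE, not its actual mechanism.

```python
# Illustrative sketch of tier selection in a three-level memory (on-chip DRAM,
# off-chip DRAM, NVRAM).  The thresholds and the decision rule are assumptions.
def choose_tier(access_rate, write_ratio,
                hot_threshold=1000, write_heavy=0.5):
    """access_rate: accesses per epoch; write_ratio: writes / total accesses."""
    if access_rate >= hot_threshold:
        return "on-chip DRAM"        # hottest data gets the lowest latency
    if write_ratio >= write_heavy:
        return "off-chip DRAM"       # write-heavy data avoids slow NVRAM writes
    return "NVRAM"                   # cold or read-mostly data tolerates NVRAM

# Example: a page accessed 50 times per epoch with 80% writes
print(choose_tier(50, 0.8))          # -> "off-chip DRAM"
```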
We propose a novel wear-leveling algorithm for a hybrid main memory architecture that exploits both the fast read and write speed of DRAM and the low power consumption and high density of PRAM. The wear-leveling algorithm consists of three techniques: DRAM buffering to reduce the write count, multiple data swapping to even out the write count among all pages, and data shifting to even out the write count among all lines within a page. To evaluate performance, we implement a PIN-based wear-leveling simulator. On SPEC CPU2006 workloads, our proposed schemes reduce the write count and keep it evenly distributed among all pages and lines with little additional overhead.
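To make the three techniques concrete, the sketch below combines a DRAM write buffer, periodic swapping of the most- and least-written PRAM frames, and per-frame line shifting. All names and parameters (DRAM_BUFFER_SIZE, SWAP_PERIOD, LINES_PER_PAGE) are hypothetical choices for illustration, not the simulator's actual implementation.

```python
import random

# Minimal sketch of the three wear-leveling ingredients: a DRAM write buffer,
# periodic hot/cold frame swapping, and line shifting inside a frame.
DRAM_BUFFER_SIZE = 64        # pages absorbed in DRAM before reaching PRAM
SWAP_PERIOD = 1024           # PRAM writes between swap rounds
LINES_PER_PAGE = 64

class WearLeveler:
    def __init__(self, num_pages):
        self.frame_of = list(range(num_pages))   # logical page -> physical frame
        self.writes = [0] * num_pages            # per-frame write counts
        self.shift = [0] * num_pages             # per-frame line rotation offset
        self.buffer = set()                      # logical pages held in DRAM
        self.total = 0

    def write(self, page, line):
        if page in self.buffer:
            return                               # absorbed by the DRAM buffer
        if len(self.buffer) < DRAM_BUFFER_SIZE:
            self.buffer.add(page)                # promote the page into DRAM
            return                               # (buffer write-back omitted)
        frame = self.frame_of[page]
        physical_line = (line + self.shift[frame]) % LINES_PER_PAGE  # shifted target line
        self.writes[frame] += 1                  # one PRAM write at physical_line
        self.total += 1
        if self.total % SWAP_PERIOD == 0:
            self._swap_hot_and_cold()

    def _swap_hot_and_cold(self):
        # Remap the logical page on the most-written frame to the least-written
        # frame (and vice versa) so future hot writes land on a fresher frame.
        hot = max(range(len(self.writes)), key=self.writes.__getitem__)
        cold = min(range(len(self.writes)), key=self.writes.__getitem__)
        hot_page, cold_page = self.frame_of.index(hot), self.frame_of.index(cold)
        self.frame_of[hot_page], self.frame_of[cold_page] = cold, hot
        self.shift[hot] = random.randrange(LINES_PER_PAGE)  # re-randomize the shift
```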
As DRAM-based main memory significantly increases the power and cost budget of a computer system, new memory technologies such as Phase-change RAM (PRAM), Ferroelectric RAM (FRAM), and Magnetic RAM (MRAM) have been proposed to replace DRAM. Among these memories, PRAM is the most promising candidate for large-scale main memory because of its high density and low power consumption. Previous research has adopted a hybrid main memory of DRAM and PRAM to make up for the latency and endurance limits of PRAM. Meanwhile, a large portion of main memory is used as a page cache to hide disk access latency. Many page caching algorithms, such as LRU, LIRS, and CLOCK-Pro, have been developed and show good performance, but they consider only main memory with uniform access latency and unlimited endurance; they cannot be directly applied to a hybrid main memory architecture with PRAM. In this paper, we propose a new page caching algorithm for the hybrid main memory, designed to overcome the long latency and low endurance of PRAM. On the basis of the LRU replacement algorithm, we propose page monitoring and migration schemes that keep read-bound pages in PRAM. The experimental results show that our page caching algorithm minimizes write accesses to PRAM while maintaining the cache hit ratio, reducing the total write access count by up to 48.4%. Therefore, we can enhance average page cache performance and mitigate the endurance problem in the hybrid main memory.
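The page monitoring step can be pictured as per-page read/write counting over a fixed window, after which read-bound pages are the ones kept in PRAM. The PageMonitor class, the window size, and the read-bound ratio below are assumptions for illustration only, not the paper's exact scheme.

```python
from collections import defaultdict

# Hypothetical sketch of per-page monitoring: count reads and writes over a
# window, then classify each page as read-bound (kept in PRAM).
MONITOR_WINDOW = 10_000      # accesses per monitoring window (assumed)
READ_BOUND_RATIO = 0.9       # fraction of reads needed to call a page read-bound

class PageMonitor:
    def __init__(self):
        self.reads = defaultdict(int)
        self.writes = defaultdict(int)
        self.seen = 0

    def record(self, page, is_write):
        if is_write:
            self.writes[page] += 1
        else:
            self.reads[page] += 1
        self.seen += 1
        if self.seen == MONITOR_WINDOW:
            decisions = self.classify()          # pages to keep in PRAM
            self.reads.clear(); self.writes.clear(); self.seen = 0
            return decisions
        return None

    def classify(self):
        read_bound = []
        for page in set(self.reads) | set(self.writes):
            total = self.reads[page] + self.writes[page]
            if self.reads[page] / total >= READ_BOUND_RATIO:
                read_bound.append(page)
        return read_bound
```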