DRAM cells leak charge over time, causing stored data to be lost, so periodic refresh is required to ensure data integrity. Modern DRAM typically refreshes cells at rank granularity, making an entire rank unavailable during a refresh period. As DRAM density keeps increasing, more rows must be refreshed during a single refresh operation, which raises refresh latency and significantly degrades overall memory system performance.

To mitigate DRAM refresh overhead, we propose a caching scheme called Rank-level Piggyback Caching (RPC), which exploits the fact that ranks in the same channel are refreshed in a staggered manner. The key idea is to cache the to-be-read data of a rank (e.g., Rank 1) in its adjacent rank (e.g., Rank 2) before Rank 1 is locked for refresh. Each rank reserves (or over-provisions) a very small area, denoted the cache region, to store the cached data. The cache regions of all ranks are organized in a rotated fashion; in other words, the cached data for the last rank is stored in the first rank. When a read request arrives at a rank undergoing refresh, the memory controller first checks the cache region in the next rank of the same channel; if the requested data is cached, the controller services the request from the cache without waiting for the refresh operation to complete, reducing memory access latency and improving system performance.

Our experimental results show that RPC outperforms the existing Fine Granularity Refresh modes. In a single-core, four-rank system, it improves system performance on average by 8.7% and 10.8% for the PARSEC 2.1 and SPLASH-2 benchmark suites, respectively. In a four-core, four-rank system, the improvements for these two suites are 8.6% and 12.2%, respectively.
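To make the rotated cache-region layout concrete, the following is a minimal sketch of the controller-side read path, assuming a four-rank channel and a row-address-keyed cache region; the names (`RankCache`, `cache_rank_for`, `service_read`) and data structures are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of RPC's rotated cache-region lookup (assumed design,
# not the authors' implementation). One channel with NUM_RANKS ranks, each
# reserving a small cache region that holds piggybacked data for the
# previous rank in the channel.

NUM_RANKS = 4

class RankCache:
    """Small reserved region in a rank, indexed by row address."""
    def __init__(self):
        self.entries = {}            # row address -> cached data

    def lookup(self, addr):
        return self.entries.get(addr)

    def fill(self, addr, data):
        self.entries[addr] = data

def cache_rank_for(rank):
    # Rotated layout: data for rank r is cached in rank (r + 1) mod NUM_RANKS,
    # so the cached data for the last rank lives in the first rank.
    return (rank + 1) % NUM_RANKS

def service_read(rank, addr, under_refresh, caches, read_from_dram):
    """Memory-controller read path: if the target rank is refreshing,
    try the cache region in the next rank before waiting."""
    if under_refresh[rank]:
        cached = caches[cache_rank_for(rank)].lookup(addr)
        if cached is not None:
            return cached              # hit: served without waiting for refresh
    return read_from_dram(rank, addr)  # miss or rank available: normal access
```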
With the explosive growth of data volume in science, engineering, information services, and other fields, data-intensive computing has gained significant interest in recent years. Challenges ranging from efficient peta-scale data management to the adoption of highly scalable cloud computing have become the norm for data center administrators. Highly scalable architectures such as Hadoop, BlobSeer, and MapR are used in large data centers for efficient data management and employ 3-way replication for fault tolerance and data availability.

One means of reducing the storage overhead of replication in data centers is erasure coding. However, HDFS-RAID (erasure-coded Hadoop) uses large block sizes and does not support update operations; changing any file-block content requires recreating the whole file, which degrades the overall write and update performance of the system. We propose a FINe Grained ERasure coding scheme (FINGER) for the erasure-coded Hadoop file system that improves both write and update performance without sacrificing read performance. The main idea is to split the large block (64 or 128 MB) into smaller chunks; the chunk layout is designed to mitigate extra reads when erasure coding is performed on a large-block update, while maintaining the same metadata size as HDFS-RAID. We implement the update operation in Hadoop and conduct testbed experiments showing that FINGER improves write and update performance by 38.20% and 8.6% relative to 3-way replication, and by 8.08% and up to 5.68× relative to HDFS-RAID, respectively.
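As a rough illustration of the fine-grained chunk layout, the sketch below maps an in-place update to the small set of parity groups that must be re-read and re-encoded, instead of the whole 64 MB block. The chunk size (1 MB), group width (4 data chunks per parity group), and function names are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch of FINGER-style fine-grained chunking (assumed layout,
# not the paper's implementation). A 64 MB block is split into fixed-size
# chunks; parity is computed per chunk group, so an update only re-encodes
# the group(s) containing the modified bytes.

BLOCK_SIZE = 64 * 1024 * 1024     # HDFS-RAID style large block
CHUNK_SIZE = 1 * 1024 * 1024      # assumed fine-grained chunk size
GROUP_SIZE = 4                    # assumed data chunks per parity group

def chunk_index(offset):
    return offset // CHUNK_SIZE

def group_of(chunk_idx):
    return chunk_idx // GROUP_SIZE

def groups_to_reencode(update_offset, update_len):
    """Return the parity groups that must be re-read and re-encoded
    for an in-place update covering [update_offset, update_offset + update_len)."""
    first = chunk_index(update_offset)
    last = chunk_index(update_offset + update_len - 1)
    return {group_of(c) for c in range(first, last + 1)}

# Example: a 256 KB update at offset 10 MB touches one chunk, so only a
# single 4 MB parity group is re-encoded rather than the full 64 MB block.
print(groups_to_reencode(10 * 1024 * 1024, 256 * 1024))   # -> {2}
```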