Data storage systems and their availability play a crucial role in contemporary datacenters. Despite mechanisms such as automatic fail-over, datacenters still depend on human agents, and consequently their destructive errors are inevitable. Due to the very large number of disk drives used in exascale datacenters and their high failure rates, the disk subsystem in storage systems has become a major source of Data Unavailability (DU) and Data Loss (DL) initiated by human errors. In this paper, we investigate the effect of Incorrect Disk Replacement Service (IDRS) on the availability and reliability of data storage systems. To this end, we analyze the consequences of IDRS in a disk array and conduct Monte Carlo simulations to evaluate DU and DL during the mission time. The proposed modeling framework can cope with a) different storage array configurations and b) Data Object Survivability (DOS), representing the effect of system-level redundancies such as remote backups and mirrors. In the proposed framework, the model parameters are obtained from industrial and scientific reports alongside field data extracted from a datacenter operating with 70 storage racks. The results show that ignoring the impact of IDRS leads to unavailability underestimation of up to three orders of magnitude. Moreover, our study suggests that by considering the effect of human errors, the conventional beliefs about the dependability of different Redundant Array of Independent Disks (RAID) mechanisms should be revised. The results show that RAID1 can result in lower availability than RAID5 in the presence of human errors. The results also show that employing an automatic fail-over policy (using hot spare disks) can reduce the drastic impact of human errors by two orders of magnitude.
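For illustration, the Monte Carlo approach described above could be sketched roughly as follows for a single RAID5 group; the group size, failure and rebuild rates, and the wrong-disk probability below are hypothetical placeholders rather than the paper's field-derived parameters, and the event model is deliberately simplified (for example, rebuild time is not accumulated into the mission clock).

```python
# Minimal Monte Carlo sketch of DU/DL estimation for one RAID5 group under
# Incorrect Disk Replacement Service (IDRS). All rates and the IDRS
# probability are hypothetical placeholders, not the paper's field data.
import random

DISKS_PER_GROUP = 8            # RAID5 group size (assumption)
LAMBDA = 1.0 / 1_000_000       # per-disk failure rate, 1/h (assumption)
MU = 1.0 / 24                  # rebuild/replacement rate, 1/h (assumption)
P_IDRS = 1e-3                  # prob. the wrong disk is pulled (assumption)
MISSION_HOURS = 5 * 8760       # 5-year mission time
TRIALS = 200_000

def run_trial():
    """Return (du, dl) flags for a single mission of one RAID5 group."""
    t = 0.0
    while t < MISSION_HOURS:
        # Time to the next disk failure in a fully redundant group.
        t += random.expovariate(DISKS_PER_GROUP * LAMBDA)
        if t >= MISSION_HOURS:
            return False, False
        rebuild = random.expovariate(MU)                          # exposure window
        second = random.expovariate((DISKS_PER_GROUP - 1) * LAMBDA)
        if second < rebuild:
            return True, True          # second failure during rebuild -> DL (and DU)
        if random.random() < P_IDRS:
            return True, False         # healthy disk pulled by mistake -> DU only
        # Otherwise the rebuild completes and the mission continues.
    return False, False

du = dl = 0
for _ in range(TRIALS):
    d, l = run_trial()
    du += d
    dl += l
print(f"P(DU) ~ {du / TRIALS:.2e}   P(DL) ~ {dl / TRIALS:.2e}")
```

Because DU and DL are rare events, far more trials than shown here would be needed for tight estimates; the sketch only conveys the structure of such a simulation.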
In recent years, the high availability and reliability of Data Storage Systems (DSS) have been significantly threatened by soft errors occurring in storage controllers. Due to their specific functionality and hardware-software stack, error propagation and manifestation in DSS differ considerably from general-purpose computing architectures. To our knowledge, no previous study has examined the system-level effects of soft errors on the availability and reliability of data storage systems. In this paper, we first analyze the effects of soft errors occurring in the server processors of storage controllers on the dependability of the entire storage system. To this end, we implemented the major functions of a typical data storage system controller, running on top of a full storage operating system stack, and developed a framework to perform fault injection experiments using a full system simulator. We then propose a new metric, Storage System Vulnerability Factor (SSVF), to accurately capture the impact of soft errors in storage systems. Extensive experiments reveal that, depending on the controller configuration, up to 40% of cache memory contains end-user data, where any unrecoverable soft error in this part results in irreversible Data Loss (DL). In contrast, soft errors in the rest of the cache memory, filled by the Operating System (OS) and storage applications, result in Data Unavailability (DU) at the storage system level. Our analysis also shows that Detectable Unrecoverable Errors (DUEs) on the cache data field are the major cause of DU in storage systems, while Silent Data Corruptions (SDCs) in the cache tag and data fields are the main cause of DL in storage systems.
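A minimal sketch of how fault-injection outcomes might be tallied into the DU/DL categories described above is given below; the upset-size distribution, the SEC-DED behaviour, and the vulnerability-factor ratio at the end are illustrative assumptions, not the paper's actual SSVF definition or injection framework.

```python
# Toy tally for a cache fault-injection campaign: unrecoverable upsets in the
# end-user-data portion of the cache count as Data Loss (DL), unrecoverable
# upsets in the OS/application portion count as Data Unavailability (DU),
# and correctable upsets are masked. All distributions are assumptions.
import random

random.seed(42)
USER_DATA_FRACTION = 0.4                     # share of cache holding end-user data (from the abstract)
tally = {"masked": 0, "DU": 0, "DL": 0}

for _ in range(100_000):
    n_upsets = random.choices([1, 2, 3], weights=[0.90, 0.08, 0.02])[0]
    if n_upsets == 1:                        # assume SEC-DED corrects single-bit upsets
        tally["masked"] += 1
    elif random.random() < USER_DATA_FRACTION:
        tally["DL"] += 1                     # unrecoverable upset hits end-user data
    else:
        tally["DU"] += 1                     # unrecoverable upset hits OS / application state

total = sum(tally.values())
ssvf_like = (tally["DU"] + tally["DL"]) / total   # fraction of injections visible at system level
print(tally, f"-> vulnerability factor ~ {ssvf_like:.1%}")
```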
This paper presents a high-level error detection and correction method, called the HVD code, to tolerate multiple bit upsets (MBUs) occurring in memory cells. The proposed method uses parity codes in four directions over a data block to assure the reliability of memories. The method provides strong error detection, while its error correction coverage is also acceptable given its low computational latency. The HVD code is useful for applications in which high error detection coverage is very important, such as memory systems. This code can also be combined with other protection codes that have high correction coverage but low detection coverage. The proposed method is evaluated using more than one billion multiple-fault injection experiments. Multiple bit flips were randomly injected into different segments of a memory system, and the fault detection and correction coverage was calculated. Results show that 100% of the injected faults are detected. We also prove that this method can correct up to three bit upsets. Hardware implementation issues are investigated to show the tradeoffs between different implementation parameters of the HVD method.
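The four-directional parity idea can be sketched as follows: parities are taken along rows, columns, and the two diagonal directions of a bit matrix, and a mismatch between stored and recomputed parities flags an MBU. The matrix size and the injected upset pattern below are arbitrary choices, and the correction logic is omitted; this is only an illustration of the detection side.

```python
# Minimal sketch of HVD-style parity generation and MBU detection over a
# bit matrix: parity along rows (H), columns (V), and both diagonals.
import numpy as np

def hvd_parities(m):
    """Return (h, v, d, ad) parity vectors of a 0/1 matrix m."""
    rows, cols = m.shape
    h = m.sum(axis=1) % 2                       # horizontal (per row)
    v = m.sum(axis=0) % 2                       # vertical (per column)
    d = np.zeros(rows + cols - 1, dtype=int)    # diagonals (i - j = const)
    ad = np.zeros(rows + cols - 1, dtype=int)   # anti-diagonals (i + j = const)
    for i in range(rows):
        for j in range(cols):
            d[i - j + cols - 1] ^= m[i, j]
            ad[i + j] ^= m[i, j]
    return h, v, d, ad

rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(8, 8))
stored = [p.copy() for p in hvd_parities(data)]

# Inject a 3-bit upset and compare recomputed parities with the stored ones.
for (i, j) in [(0, 0), (2, 5), (7, 3)]:
    data[i, j] ^= 1
detected = any((a != b).any() for a, b in zip(hvd_parities(data), stored))
print("MBU detected:", detected)
```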
In this paper, we investigate the effect of incorrect disk replacement service on the availability of data storage systems. To this end, we first conduct Monte Carlo simulations to evaluate the availability of the disk subsystem, considering both disk failures and incorrect disk replacement service. We also propose a Markov model that corroborates the Monte Carlo simulation results. We further extend the proposed model to consider the effect of an automatic disk fail-over policy. The results obtained by the proposed model show that overlooking the impact of incorrect disk replacement can result in unavailability underestimation of up to three orders of magnitude. Moreover, this study suggests that by considering the effect of human errors, the conventional beliefs about the dependability of different RAID mechanisms should be revised. The results show that, in the presence of human errors, RAID1 can result in lower availability than RAID5.
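As a rough illustration of the kind of Markov model referred to above, the sketch below builds a small continuous-time Markov chain for one RAID1 mirrored pair with an incorrect-replacement branch and computes its mean unavailability over a mission. The state set, all rates, and the wrong-disk probability are hypothetical assumptions, not the paper's model or parameters.

```python
# Minimal CTMC sketch of a RAID1 mirrored pair with an incorrect-replacement
# branch. States: 0 = both disks up, 1 = degraded (one disk awaiting
# replacement), 2 = down because the healthy disk was pulled (DU,
# recoverable), 3 = data loss (absorbing).
import numpy as np
from scipy.linalg import expm

lam = 1e-6      # per-disk failure rate, 1/h (assumption)
mu = 1.0 / 24   # replacement/rebuild rate, 1/h (assumption)
nu = 1.0 / 2    # recovery rate from a wrong-disk pull, 1/h (assumption)
p = 1e-3        # probability the replacement targets the wrong disk (assumption)

Q = np.array([
    [-2 * lam,      2 * lam,      0.0,    0.0],
    [mu * (1 - p), -(mu + lam),   mu * p, lam],
    [0.0,           nu,          -nu,     0.0],
    [0.0,           0.0,          0.0,    0.0],
])

T = 5 * 8760                                  # 5-year mission time in hours
p0 = np.array([1.0, 0.0, 0.0, 0.0])
# Average probability of being in a down state (DU or DL) over the mission.
grid = np.linspace(0.0, T, 200)
down = np.mean([(p0 @ expm(Q * t))[[2, 3]].sum() for t in grid])
print(f"mean unavailability over mission ~ {down:.2e}")
```

Setting p to zero in this toy chain removes the incorrect-replacement path, which is the comparison that exposes how much unavailability is underestimated when human errors are ignored.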
Using Error Detection Codes (EDC) and Error Correction Codes (ECC) is a noteworthy way to increase the robustness of cache memories against soft errors. EDC enables detecting errors in cache memory, while ECC is used to correct erroneous cache blocks. ECCs are often costly, as they impose considerable area and energy overhead on the cache. Reducing this overhead has been the subject of many studies. In particular, a previous study has suggested mapping ECC to the main memory at the expense of high cache traffic and energy. A major source of this excessive traffic and energy is the high frequency of cache writes. In this work, we show that a significant portion of cache writes are silent, i.e., they write the same data that is already stored. We build on this observation and introduce Traffic-aware ECC (or simply TCC). TCC detects silent writes using an efficient mechanism; once such writes are detected, updating their ECC is avoided, effectively reducing L2 cache traffic and access frequency. Using our solution, we reduce L2 cache access frequency by 8% while maintaining performance. We reduce L2 cache dynamic energy and overall cache energy by up to 32% and 8%, respectively. Furthermore, TCC reduces the L2 cache miss rate by 3%.
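The silent-write filtering idea can be sketched as follows: before a block (and its ECC, stored elsewhere) is updated, the incoming data is compared with the resident copy, and identical writes skip both updates. The block structure, the checksum standing in for a real ECC, and the counters are illustrative assumptions, not TCC's actual hardware mechanism.

```python
# Minimal sketch of a silent-write filter: identical writes skip both the
# data update and the (off-cache) ECC update, reducing ECC traffic.

class L2Block:
    def __init__(self, data: bytes):
        self.data = data
        self.ecc = self._compute_ecc(data)
        self.ecc_updates = 0          # counts traffic to the ECC store

    @staticmethod
    def _compute_ecc(data: bytes) -> int:
        # Placeholder "ECC": a simple checksum standing in for a real code.
        return sum(data) & 0xFFFF

    def write(self, new_data: bytes) -> bool:
        """Return True if the write was silent (ECC update avoided)."""
        if new_data == self.data:     # silent write: same data already present
            return True
        self.data = new_data
        self.ecc = self._compute_ecc(new_data)
        self.ecc_updates += 1
        return False

blk = L2Block(b"\x00" * 64)
writes = [b"\x00" * 64, b"\x01" + b"\x00" * 63, b"\x01" + b"\x00" * 63]
silent = sum(blk.write(w) for w in writes)
print(f"{silent}/{len(writes)} writes were silent; ECC updates: {blk.ecc_updates}")
```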