2009
DOI: 10.1109/tc.2008.163
A Highly Accurate Method for Assessing Reliability of Redundant Arrays of Inexpensive Disks (RAID)

Cited by 55 publications (44 citation statements). References 17 publications.
“…This new model is based on statistical principles associated with non-homogeneous Poisson processes, evaluated using Monte Carlo simulation. The simulation results follow field data closely [2,3,4], but this improved accuracy comes at the expense of usability, since simulations require specialized code and extensive computation.…”
Section: Introduction
Mentioning confidence: 80%
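The simulation approach this citation describes is easy to sketch. Below is a minimal Monte Carlo illustration in Python, using a Weibull renewal process as a stand-in for the paper's non-homogeneous Poisson model; the group size, Weibull parameters, and rebuild time are illustrative assumptions, not values taken from the paper.

```python
import random

GROUP_SIZE = 8          # disks per RAID-5 group (assumed)
SHAPE = 1.2             # Weibull shape > 1: increasing hazard (assumed)
SCALE_H = 461_386.0     # Weibull scale in hours (assumed)
REBUILD_H = 24.0        # mean rebuild time in hours (assumed)
MISSION_H = 5 * 8760.0  # 5-year mission window
TRIALS = 100_000

def group_loses_data(rng):
    """One trial: run a RAID group until the mission ends or a second
    disk fails while a rebuild is in progress (data loss)."""
    # absolute failure time of the disk currently occupying each slot
    fail_at = [rng.weibullvariate(SCALE_H, SHAPE) for _ in range(GROUP_SIZE)]
    while True:
        t = min(fail_at)
        if t > MISSION_H:
            return False                            # group survived
        rebuild = rng.expovariate(1.0 / REBUILD_H)  # assumed exponential rebuild
        i = fail_at.index(t)
        # a second failure inside the rebuild window is a double disk failure
        if any(t < f <= t + rebuild for j, f in enumerate(fail_at) if j != i):
            return True
        # the replacement disk starts a fresh Weibull life after the rebuild
        fail_at[i] = t + rebuild + rng.weibullvariate(SCALE_H, SHAPE)

rng = random.Random(1)
losses = sum(group_loses_data(rng) for _ in range(TRIALS))
print(f"P(data loss within 5 years) ~ {losses / TRIALS:.4%}")
```

Even this toy version shows the usability cost the citation mentions: a defensible estimate needs many trials and a model coded by hand rather than a closed-form equation.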
“…Often, the mean is converted into a more concrete term: the number of double disk failures (DDFs) per year. Unfortunately, the number of DDFs per year as a function of time derived from MTTDL does not agree with actual field data [2].…”
Section: Introduction
Mentioning confidence: 82%
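The conversion this citation refers to is simple arithmetic under the constant-rate assumption implicit in MTTDL. A hypothetical example with assumed numbers:

```python
# Hypothetical numbers for illustration only.
MTTDL_HOURS = 1.5e6       # assumed MTTDL of one RAID group
GROUPS_IN_FIELD = 10_000  # assumed installed base of groups
HOURS_PER_YEAR = 8760

# Constant-rate assumption: expected DDFs/year = total exposure / MTTDL.
ddf_per_year = GROUPS_IN_FIELD * HOURS_PER_YEAR / MTTDL_HOURS
print(f"Expected DDFs per year ~ {ddf_per_year:.1f}")  # ~58.4
```

Because the assumed rate is constant, this predicts the same DDF count every year of the deployment, which is exactly the time-independence that the field data cited above contradicts.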
“…A previous study [5] showed that the MTTDL errors are so large that the equation is not even a reasonable approximation. This study shows that even when latent defects are included, RAID group failure frequency depends greatly on the latent defect (LD) scrub times set by operating system priorities.…”
Section: Figure 4 - Effects of Recovery Distributions, Group Size = 23
Mentioning confidence: 98%
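A back-of-the-envelope sketch shows why the scrub interval matters so much: the longer a latent defect can sit undetected, the more likely a disk failure finds an unreadable sector on a surviving disk. The defect arrival rate, group size, and clean-sweep scrub below are simplifying assumptions for illustration, not the paper's model.

```python
import math

LAMBDA_DEFECT = 1.0 / 10_000.0  # latent-defect arrival rate per disk-hour (assumed)
GROUP_SIZE = 8                  # disks per group (assumed)

def p_defect_present(scrub_interval_h):
    """Time-averaged probability that a disk carries at least one latent
    defect, assuming Poisson defect arrivals and a periodic scrub that
    clears all of them (a simplifying assumption)."""
    lam, T = LAMBDA_DEFECT, scrub_interval_h
    # average of 1 - exp(-lam * u) over a scrub cycle u in [0, T]
    return 1.0 - (1.0 - math.exp(-lam * T)) / (lam * T)

for T in (24.0, 168.0, 336.0):  # daily, weekly, biweekly scrubs
    # probability at least one of the surviving disks has a defect
    p = 1.0 - (1.0 - p_defect_present(T)) ** (GROUP_SIZE - 1)
    print(f"scrub every {T:5.0f} h -> P(defect on a survivor) = {p:.3f}")
```

Under these assumed rates, stretching the scrub interval from a day to two weeks raises the chance of a defect on a survivor by more than an order of magnitude, which is the sensitivity the citation points to.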
“…HDDs generally do not have constant failure rates [5], and a 2-parameter Weibull distribution has been shown to be a reasonable distributional fit for operational failures. Latent defects are often caused by contamination or other causes inherent to the HDD design [3].…”
Section: Operational Failures and Latent Defects
Mentioning confidence: 99%
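To see the non-constant failure rate concretely, the 2-parameter Weibull hazard can be evaluated at a few disk ages; the shape and scale below are assumed values, with shape > 1 giving the increasing failure rate the citation describes.

```python
SHAPE = 1.2          # assumed Weibull shape (>1 means wear-out, rising rate)
SCALE_H = 461_386.0  # assumed Weibull scale in hours

def weibull_hazard(t_hours, beta=SHAPE, eta=SCALE_H):
    """Instantaneous failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t_hours / eta) ** (beta - 1.0)

for years in (1, 3, 5):
    t = years * 8760.0
    print(f"age {years} y: hazard = {weibull_hazard(t):.3e} per hour")
```

With beta = 1 the hazard collapses to the constant 1/eta assumed by MTTDL-style models; any other shape makes the failure rate a function of disk age.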
“…Mean time to data loss (MTTDL) is a reliability index of storage systems that can easily be calculated and has been widely used in the field [7]. However, some authors report discrepancies between the actual rate of data loss and MTTDL [7], [8].…”
Section: Related Work
Mentioning confidence: 99%
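The ease of calculation this citation mentions comes from the standard Markov-model closed form for an N-disk RAID-5 group with exponential failure and repair; the parameter values below are assumptions for illustration.

```python
def mttdl_raid5(n_disks, mttf_h, mttr_h):
    """Classic approximation MTTDL ~ MTTF**2 / (N * (N - 1) * MTTR),
    valid when MTTR << MTTF; assumes exponential failure and repair."""
    return mttf_h ** 2 / (n_disks * (n_disks - 1) * mttr_h)

# Illustrative (assumed) values: 8 disks, 500k-hour MTTF, 24-hour repair.
print(f"MTTDL ~ {mttdl_raid5(8, 500_000.0, 24.0):,.0f} hours")
```

The result, on the order of tens of thousands of years per group, is far larger than observed loss rates, which is the discrepancy that [7] and [8] report.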