It has become commonplace to observe frequent multiple disk failures in large data centers, where thousands of drives operate simultaneously. Disks are typically protected by replication or erasure coding to guarantee a predetermined reliability. However, to optimize data protection, real-life disk failure trends need to be modeled appropriately. The classical approach is to estimate the probability density function of failures using non-parametric techniques such as Kernel Density Estimation (KDE). However, these techniques are suboptimal in the absence of the true underlying density function, and insufficient data may lead to overfitting. In this study, we propose applying a set of transformations to the collected failure data to achieve an almost perfect regression in the transform domain. Then, by inverse transformation, we analytically estimate the failure density through efficient computation of the moment generating function and hence the density function. Moreover, we develop a visualization platform to extract useful statistical information such as the model-based mean time to failure. Our results indicate that, for such heavy-tailed data, the complex Gaussian Hypergeometric Distribution (GHD) and the classical KDE approach can perform best, provided that overfitting is avoided and the computational burden is acceptable. On the other hand, we show that the failure distribution exhibits a less complex, Argus-like distribution after a Box-Cox transformation, up to appropriate scaling and shifting operations.
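The core idea above (transform heavy-tailed failure data so that a simpler distribution fits well in the transform domain) can be illustrated with a minimal sketch. This is not the authors' pipeline: the synthetic lognormal lifetimes and all parameters below are illustrative assumptions, and SciPy's maximum-likelihood Box-Cox estimator stands in for the paper's transformation step.

```python
# Minimal sketch: Box-Cox transform of synthetic heavy-tailed lifetime
# samples, then a skewness check to confirm the transformed data is far
# closer to symmetric (and thus easier to fit parametrically).
# The lognormal generator below is an assumption, not real disk data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic heavy-tailed "time to failure" samples (hours).
lifetimes = rng.lognormal(mean=3.0, sigma=0.8, size=5000)

# scipy.stats.boxcox estimates the transform parameter lambda
# by maximum likelihood and returns the transformed samples.
transformed, lam = stats.boxcox(lifetimes)

# Skewness should shrink markedly after the transform, which is what
# makes regression/fitting in the transform domain tractable.
skew_before = stats.skew(lifetimes)
skew_after = stats.skew(transformed)
print(f"lambda={lam:.3f}, skew before={skew_before:.2f}, after={skew_after:.2f}")
```

In this sketch the fitted lambda lands near zero, i.e. close to a log transform, which is expected for lognormal input; real disk failure data would yield its own lambda and, per the abstract, an Argus-like shape after scaling and shifting.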