The number of failures occurring in large-scale high performance computing (HPC) systems is increasing significantly due to the large number of physical components these systems contain. Fault tolerance (FT) mechanisms help parallel applications mitigate the impact of failures, but they introduce additional overhead. Failure prediction is therefore needed to utilize FT mechanisms judiciously, and the proficiency of a failure predictor determines how efficiently those mechanisms are used. The proficiency of a failure predictor in HPC is usually characterized by well-known error measurements, e.g., MSE, MAD, precision, and recall, where lower error implies greater proficiency. In this manuscript, we propose to view prediction proficiency from another aspect: lost computing time. We then discuss the insufficiency of error measurements as HPC failure prediction proficiency metrics from the perspective of lost computing time, and propose novel metrics that address these shortcomings.
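For reference, the error measurements mentioned above follow their standard definitions; a brief summary, with notation introduced here purely for illustration ($y_i$ an observed value, $\hat{y}_i$ the corresponding prediction over $n$ samples, and $TP$, $FP$, $FN$ the true-positive, false-positive, and false-negative counts), is

\begin{align*}
\mathrm{MSE} &= \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i\bigr)^2, &
\mathrm{MAD} &= \frac{1}{n}\sum_{i=1}^{n}\bigl|y_i - \hat{y}_i\bigr|, \\
\mathrm{precision} &= \frac{TP}{TP + FP}, &
\mathrm{recall} &= \frac{TP}{TP + FN}.
\end{align*}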
I. INTRODUCTION

High performance computing (HPC) systems continually accelerate performance by rapidly increasing the number of floating point operations per second (FLOPS) they can process. To achieve this, these systems continue to incorporate faster and more robust computing components. Since processor clock speed has reached a plateau due to technical limitations, recent HPC system development has tended towards multiplying the number of computing components to achieve higher performance. Unfortunately, this comes at a price: while peak performance capability is increasing with the growing number of components, the number of failures in such systems is also increasing considerably [16].

Fault tolerance (FT) mechanisms such as process migration [9] and checkpoint/restart [8] have been introduced to the HPC community to mitigate the effect of failures. Although these mechanisms help HPC applications cope with failures, they also increase application overhead, and overusing them places a considerable burden on the applications. Knowing beforehand when and where a failure will happen would help HPC applications utilize FT mechanisms appropriately. This is where failure prediction plays an important role: it guides running parallel applications as to when a failure is likely to occur, so that they can invoke the preferred FT mechanisms (see the sketch below).
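As a rough illustration of how a predictor can guide FT mechanism usage, the following sketch shows an application that periodically queries a failure predictor and pays the checkpoint overhead only when a failure is expected soon. All names here (app, predictor, next_failure_eta, checkpoint) are hypothetical placeholders introduced for this example, not the API of any particular FT framework.

# Minimal sketch of prediction-guided fault tolerance. The objects `app` and
# `predictor` and their methods are hypothetical placeholders, not part of any
# real FT library or failure-prediction framework.

CHECK_INTERVAL = 60.0        # seconds of computation between predictor queries
LEAD_TIME_THRESHOLD = 300.0  # act if a failure is predicted within 5 minutes

def run_with_proactive_ft(app, predictor):
    """Run `app`, invoking FT only when a failure is predicted to occur soon."""
    while not app.finished():
        eta = predictor.next_failure_eta()  # seconds until predicted failure, or None
        if eta is not None and eta <= LEAD_TIME_THRESHOLD:
            app.checkpoint()                # or migrate processes off the suspect node
        app.compute(CHECK_INTERVAL)         # continue useful work until the next query

The point of the sketch is the trade-off driving this paper: how well the predictor estimates the failure time determines how much computing time is lost to unnecessary checkpoints or to missed failures.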