Summary
Current large-scale systems, such as datacenters and supercomputers, face ever-increasing electricity consumption. These infrastructures are often dimensioned for peak workload; yet, because they are not power-proportional, their power consumption remains high even when the workload is low. Shutdown techniques have been developed to adapt the number of powered-on servers to the actual workload. However, datacenter operators are reluctant to adopt such approaches because of their potential impact on reactivity and hardware failures, and because their energy gains are often largely misjudged. In this article, we evaluate the potential gains of shutdown techniques while accounting for the time and energy costs of shutting down and booting up servers. This evaluation covers recent server architectures as well as future energy-aware architectures. Our simulations exploit real traces collected on production infrastructures and explore various machine configurations and several shutdown policies, with and without workload prediction. We study the impact of knowledge of the future on the energy savings achievable by such policies. Finally, we examine the energy benefits brought by suspend-to-disk and suspend-to-RAM techniques, and we study the impact of shutdown techniques on the energy consumption of prospective hardware with heterogeneous processors (the big-medium-little paradigm).
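To make the accounting of shutdown and boot-up costs concrete, the usual break-even reasoning can be sketched as follows; the notation ($P_{\text{idle}}$, $P_{\text{off}}$, $E_{\text{on}\to\text{off}}$, $E_{\text{off}\to\text{on}}$, $t_{\text{on}\to\text{off}}$, $t_{\text{off}\to\text{on}}$) is illustrative and not taken from the article itself. Shutting a server down during an idle period of length $T$ saves energy only if the energy spent idling exceeds the transition overheads plus the residual consumption while off:
\[
P_{\text{idle}} \, T \;>\; E_{\text{on}\to\text{off}} + E_{\text{off}\to\text{on}} + P_{\text{off}} \left( T - t_{\text{on}\to\text{off}} - t_{\text{off}\to\text{on}} \right),
\]
which, assuming $P_{\text{idle}} > P_{\text{off}}$ and an idle period long enough to contain both transitions, gives a minimum profitable idle time
\[
T_s \;=\; \frac{E_{\text{on}\to\text{off}} + E_{\text{off}\to\text{on}} - P_{\text{off}} \left( t_{\text{on}\to\text{off}} + t_{\text{off}\to\text{on}} \right)}{P_{\text{idle}} - P_{\text{off}}}.
\]
Idle periods shorter than $T_s$ are better spent leaving the server on, which is why shutdown policies and workload prediction matter in the evaluation.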