More and more data centers are being built, consuming ever more kilowatts of energy. Over the years, energy has become a dominant cost factor for data center operators. Utilizing low-power idle modes is an immediate remedy to reduce data center power consumption. We use simulation to quantify the difference in energy consumption caused exclusively by virtual machine schedulers. Besides demonstrating the inefficiency of widespread default schedulers, we present our own optimized scheduler. Across a range of realistic simulation scenarios, our customized scheduler OptSched reduces cumulative machine uptime by up to 60.1%. We evaluate the effect of data center composition, runtime distribution, virtual machine sizes, and batch requests on cumulative machine uptime. IaaS administrators can use our results to quickly assess possible reductions in machine uptime and, hence, untapped energy-saving potential.