A scalable approach computes, in operationally compatible time, the energy dispatch under uncertainty for electrical power grid systems of realistic size and with thousands of scenarios.

We present a scalable computational framework for solving two-stage stochastic optimization problems that arise in power grid optimization under uncertainty. We aim to solve the problem of choosing the optimal operation of electricity generation facilities to produce energy at the lowest cost while reliably serving consumers, recognizing the operational limits of the generation and transmission facilities.

In the US, power grid optimization problems are solved by each of the 10 independent system operators.1 In the form of unit commitment (UC), such problems are the main component of day-ahead planning of generators and electricity markets, and they are currently solved in less than one hour. In the form of economic dispatch (ED), these optimization problems are used to balance supply and demand, and they need to be solved within several minutes.2 We note that these time windows reflect current practice only; the evolution of energy operations to include more renewable energy is likely both to increase the problems' size and to reduce the time in which they must be solved. The economic footprint of these issues is enormous: in the US, solving such problems results in dispatch orders to generators worth several billions to tens of billions of dollars per year per independent system operator, for a national total of hundreds of billions of dollars per year. Their critical contribution to the US economy has led to such technologies being specifically regulated by law, for example in the Energy Policy Act of 2005, Sections 1298 and 1832.

Here, we focus on the computing challenges stemming from one such evolutionary imperative: accounting for energy supply variability by using optimization under uncertainty techniques.3,4 This results in vastly larger stochastic optimization problems, having several billion variables and constraints. The problems are so large because tens of thousands of possible realizations of the uncertainty, also known as scenarios, are needed to accurately characterize the supply variability, and because the number of decision variables and constraints of the deterministic UC/ED problem is multiplied by the number of scenarios in the stochastic formulation.
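To make the size growth concrete, the following is a minimal sketch of a two-stage stochastic dispatch problem in extensive form, solved with SciPy's `linprog`. The cost and demand numbers are hypothetical toy data, not from the article: one first-stage (day-ahead) dispatch variable plus one recourse (real-time) variable and one coupling constraint per scenario, so the problem's size grows linearly with the number of scenarios.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance (illustrative numbers only).
probs = np.array([0.3, 0.4, 0.3])        # scenario probabilities
demand = np.array([80.0, 100.0, 120.0])  # demand realizations (MW)
c_first = 10.0    # day-ahead generation cost ($/MWh)
c_recourse = 50.0 # real-time recourse cost ($/MWh)

S = len(probs)
# Extensive form: variables = [x, y_1, ..., y_S]
c = np.concatenate(([c_first], probs * c_recourse))

# Coupling constraints x + y_s >= d_s, written as -x - y_s <= -d_s.
A_ub = np.zeros((S, 1 + S))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), 1 + np.arange(S)] = -1.0
b_ub = -demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + S))
print(round(res.fun, 1), round(res.x[0], 1))  # -> 1200.0 120.0
```

With these toy costs, recourse is so expensive that the optimal first-stage dispatch covers the worst-case demand; the point of the sketch is the structure, in which every additional scenario adds its own block of variables and constraints, exactly the multiplication of deterministic problem size described above.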
System Approach

Because the previously defined problems must be solved within restrictive time limits, a high-end, distributed-memory supercomputing solution is required. We have developed the PIPS interior-point method (PIPS-IPM) optimization solver, which implements an IPM with specialized linear algebra. PIPS-IPM's main computational burden is solving linear systems at each optimization step. Several features of the problem beyond its large size make high performance difficult to achieve, namely:

■ The linear system has hybrid sparse and dense features, stemming from the different nature of the two stages of the problem; in addition, the constraint matrix is a mix of pow...
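The two-stage structure gives the IPM linear system a block-angular (arrowhead) form: one block per scenario, coupled to the first-stage block through a border. A standard way to exploit this, and a reasonable sketch of the kind of elimination a distributed IPM solver builds on, is a Schur complement on the first-stage block, with each scenario solve done independently. The block sizes and matrices below are random toy data of my own, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
S, n0, ns = 3, 2, 4  # scenarios, first-stage size, per-scenario size

# Hypothetical symmetric arrowhead system: diagonal scenario blocks K_s,
# borders B_s coupling each scenario to the first-stage block K0.
K0 = 10.0 * np.eye(n0)
Ks, Bs, rs = [], [], []
for _ in range(S):
    M = 0.1 * rng.standard_normal((ns, ns))
    Ks.append(4.0 * np.eye(ns) + 0.5 * (M + M.T))  # well-conditioned SPD block
    Bs.append(rng.standard_normal((ns, n0)))
    rs.append(rng.standard_normal(ns))
r0 = rng.standard_normal(n0)

# Schur complement on the first-stage block:
#   C = K0 - sum_s B_s^T K_s^{-1} B_s,  rhs = r0 - sum_s B_s^T K_s^{-1} r_s.
# Each term of the sums is an independent per-scenario solve (parallelizable).
C, rhs = K0.copy(), r0.copy()
for K, B, r in zip(Ks, Bs, rs):
    C -= B.T @ np.linalg.solve(K, B)
    rhs -= B.T @ np.linalg.solve(K, r)

x0 = np.linalg.solve(C, rhs)                    # first-stage solve
xs = [np.linalg.solve(K, r - B @ x0)            # scenario back-substitutions
      for K, B, r in zip(Ks, Bs, rs)]

# Check against a dense solve of the fully assembled arrowhead matrix.
N = S * ns + n0
A, b = np.zeros((N, N)), np.zeros(N)
for s in range(S):
    i = s * ns
    A[i:i+ns, i:i+ns] = Ks[s]
    A[i:i+ns, -n0:] = Bs[s]
    A[-n0:, i:i+ns] = Bs[s].T
    b[i:i+ns] = rs[s]
A[-n0:, -n0:] = K0
b[-n0:] = r0
print(np.allclose(np.concatenate(xs + [x0]), A := np.linalg.solve(A, b)))  # True
```

The per-scenario solves never touch each other's blocks, which is what makes a distributed-memory decomposition across scenarios natural; the dense coupling concentrates in the comparatively small first-stage Schur system.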