We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting the sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work.
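The idea of absorbing pivot perturbations with an outer Krylov iteration can be illustrated in a few lines. This is a hedged sketch, not the PIPS/PARDISO implementation: it uses SciPy's LU factorization to stand in for the (possibly perturbed) factorization and wraps it as a preconditioner for BiCGStab, which then recovers an accurate solution of the original system. The function name `solve_with_refinement` is illustrative.

```python
# Sketch of the outer-iteration idea (assumed names, not the PIPS API):
# a factorization computed with pivot perturbations is only an approximate
# inverse of the original matrix; using it as a preconditioner inside
# BiCGStab drives the residual of the *unperturbed* system to tolerance.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_with_refinement(A, b):
    """Solve A x = b with a factorization-preconditioned BiCGStab iteration."""
    lu = spla.splu(A.tocsc())  # stands in for the (perturbed) factorization
    M = spla.LinearOperator(A.shape, matvec=lu.solve, dtype=A.dtype)
    x, info = spla.bicgstab(A, b, M=M)  # info == 0 signals convergence
    return x, info
```

In the paper's setting the factorization is inexact by design, so the preconditioned iteration typically needs only a handful of steps; here, with an exact LU, it converges essentially immediately.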
A scalable approach computes, in operationally compatible time, the energy dispatch under uncertainty for electrical power grid systems of realistic size and with thousands of scenarios. We present a scalable computational framework for solving two-stage stochastic optimization problems that arise in power grid optimization under uncertainty. We aim to solve the problem of choosing the optimal operation of electricity generation facilities to produce energy at the lowest cost while reliably serving consumers, recognizing any operational limits of the generation and transmission facilities. In the US, power grid optimization problems are solved by each of the 10 independent system operators.¹ In the form of unit commitment (UC), such problems are the main component of day-ahead planning of generators and electricity markets, and currently they are solved in less than one hour. In the form of economic dispatch (ED), these optimization problems are used to balance supply and demand, and they need to be solved within several minutes.² We note that these time windows reflect current practice only, and the evolution of energy operations to include more renewable energy is likely to both increase the problems' size and reduce the time in which they need to be solved. The economic footprint of these issues is enormous; in the US, solving such problems results in dispatch orders to generators worth several billions to tens of billions of dollars per year per independent system operator, for a national total of hundreds of billions of dollars per year. Their critical contribution to the US economy has led to such technologies being specifically controlled by law, for example in the Energy Policy Act of 2005, Sections 1298 and 1832. Here, we focus on the computing challenges stemming from one such evolutionary imperative: accounting for energy supply variability by using optimization-under-uncertainty techniques.³,⁴ This results in vastly larger stochastic optimization problems, having several billion variables and constraints. The problems are this large because tens of thousands of possible realizations of the uncertainty, also known as scenarios, are needed to accurately characterize the supply variability, and because the number of decision variables and constraints of the deterministic UC/ED problem is multiplied by the number of scenarios in the stochastic formulation.

System Approach

Because these problems must be solved within restrictive time limits, a high-end, distributed-memory, supercomputing solution is required. We have developed the PIPS interior-point method (PIPS-IPM) optimization solver, which implements an IPM and specialized linear algebra. PIPS-IPM's main computational burden is solving linear systems at each optimization step. Several features of the problem beyond its large size create difficulties in achieving high performance, namely:

■ The linear system has hybrid sparse and dense features, stemming from the different nature of the two stages of the problem; in addition, the constraint matrix is a mix of pow...
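The Schur-complement decomposition mentioned above can be sketched concretely. In a two-stage problem, each scenario contributes a sparse KKT block Ki coupled to the first-stage variables through a sparse matrix Bi, and the (dense) first-stage Schur matrix is S = B0 − Σi BiᵀKi⁻¹Bi. The sketch below is illustrative only; the names (`B0`, `K_blocks`, `B_blocks`) are hypothetical, and the real solver performs these solves in parallel across nodes with multiple sparse right-hand sides handled inside the factorization.

```python
# Illustrative sketch (assumed data layout, not the PIPS implementation):
# accumulate each scenario's contribution Bi^T Ki^{-1} Bi into the
# first-stage Schur complement S = B0 - sum_i Bi^T Ki^{-1} Bi.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def schur_complement(B0, K_blocks, B_blocks):
    """Dense first-stage Schur matrix from sparse per-scenario blocks."""
    S = B0.toarray().astype(float)
    for Ki, Bi in zip(K_blocks, B_blocks):
        lu = spla.splu(Ki.tocsc())      # factorize the scenario's KKT block
        X = lu.solve(Bi.toarray())      # one solve, multiple right-hand sides
        S -= Bi.toarray().T @ X         # subtract the scenario's contribution
    return S
```

Because each scenario's contribution is independent, this loop is what the paper distributes across compute nodes; the multiple-right-hand-side solve is exactly where exploiting sparsity of the right-hand sides pays off.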
We present PIPS-NLP, a software library for the solution of large-scale structured nonconvex optimization problems on high-performance computers. We discuss the features of the implementation in the context of electrical power and gas network systems. We illustrate how different model structures arise in these domains and how these can be exploited to achieve high computational efficiency. Using computational studies from security-constrained ACOPF and line-pack dispatch in natural gas networks, we demonstrate robustness and scalability.
Abstract. In this article we construct and analyze multigrid preconditioners for discretizations of operators of the form D_λ + K*K, where D_λ is multiplication by a relatively smooth function λ > 0 and K is a compact linear operator. These systems arise when applying interior point methods to the minimization problem min_u (1/2)(‖Ku − f‖² + β‖u‖²) with box constraints u̲ ≤ u ≤ ū on the controls. The presented preconditioning technique is closely related to the one developed by Drȃgȃnescu and Dupont in [13] for the associated unconstrained problem, and is intended for large-scale problems. As in [13], the quality of the resulting preconditioners is shown to increase as h ↓ 0, but to decrease as the smoothness of λ declines. We test this algorithm first on a Tikhonov-regularized backward parabolic equation with box constraints on the control, and then on a standard elliptic-constrained optimization problem. In both cases the number of linear iterations per optimization step, as well as the total number of fine-scale matrix-vector multiplications, decreases with increasing resolution, showing the method to be potentially very efficient for truly large-scale problems.
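For clarity, the box-constrained problem underlying these systems can be stated in display form. This is a reconstruction assuming the garbled bound notation in the extracted text denotes componentwise lower and upper bounds on the control:

```latex
\min_{u} \;\; \frac{1}{2}\left( \|Ku - f\|^2 + \beta \|u\|^2 \right)
\quad \text{subject to} \quad \underline{u} \le u \le \overline{u},
```

where β > 0 is the Tikhonov regularization parameter; the interior point method applied to this problem produces linear systems with operator D_λ + K*K, the D_λ term arising from the barrier contributions of the active bounds.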