The graphics processing unit (GPU) is used to solve large linear systems derived from partial differential equations. The problems studied are strongly convection-dominated, span a range of sizes, and are common to many fields, including computational fluid dynamics, heat transfer, and structural mechanics. The paper presents comparisons between GPU and CPU implementations of several well-known iterative methods, including Kaczmarz's, Cimmino's, component averaging, conjugate gradient normal residual (CGNR), symmetric successive overrelaxation-preconditioned conjugate gradient, and conjugate-gradient-accelerated component-averaged row projections (CARP-CG). Computations are performed on dense as well as general banded systems. The results demonstrate that our GPU implementation outperforms CPU implementations of these algorithms, as well as previously studied parallel implementations on Linux clusters and shared memory systems. Although the CGNR method had begun to fall out of favor for such problems, on the problems studied in this paper the GPU implementation of CGNR performed better than the other methods, including a cluster implementation of the CARP-CG method.
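For readers unfamiliar with CGNR, the sketch below shows the textbook form of the method: conjugate gradients applied to the normal equations A^T A x = A^T b, using only matrix-vector products with A and A^T. This is a minimal NumPy illustration of the algorithm itself, not the authors' GPU implementation; the function name, tolerance, and iteration cap are illustrative choices.

```python
import numpy as np

def cgnr(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Textbook CGNR: conjugate gradients on the normal equations A^T A x = A^T b.

    Only matrix-vector products with A and A^T are needed, which is what makes
    the method attractive for GPU (and banded/sparse) implementations.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x          # residual of the original system
    z = A.T @ r            # residual of the normal equations
    p = z.copy()           # search direction
    zz = z @ z
    for _ in range(max_iter):
        w = A @ p
        alpha = zz / (w @ w)
        x += alpha * p
        r -= alpha * w
        z = A.T @ r
        zz_new = z @ z
        if np.sqrt(zz_new) < tol:   # stop when the normal-equation residual is small
            break
        beta = zz_new / zz
        p = z + beta * p
        zz = zz_new
    return x

# Illustrative use on a small random overdetermined system
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x = cgnr(A, b)
```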
The scaling of linear optimization problems, while poorly understood, is not short of techniques. Scaling is the most common preconditioning technique used in linear optimization solvers; it is designed to improve the conditioning of the constraint matrix and reduce the computational effort required for solution. Most importantly, scaling provides a relative point of reference for absolute tolerances. For instance, absolute tolerances are used in the simplex algorithm to determine when a reduced cost is considered nonnegative. Existing techniques for obtaining scaling factors for linear systems are investigated herein. With a focus on the impact of these techniques on the performance of the simplex method, we analyze the results of over half a billion simplex computations with CPLEX, MINOS, and GLPK, including the computation of the condition number at every iteration. Some of the scaling techniques studied are computationally more expensive than others. For the Netlib and Kennington problems considered herein, it is found that, on average, no scaling technique outperforms the simplest one (equilibration), despite the added complexity and computational cost.
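To make the baseline concrete, the following sketch shows one common form of equilibration: each row of the constraint matrix is divided by its largest absolute entry, then each column of the result is divided by its largest absolute entry, with the factors returned as diagonal scalings. This is a generic illustration of equilibration, not the specific variant implemented inside CPLEX, MINOS, or GLPK; the function name and the guard against all-zero rows and columns are assumptions.

```python
import numpy as np

def equilibrate(A):
    """Row-then-column equilibration of a constraint matrix A.

    Returns diagonal factors r, c such that diag(r) @ A @ diag(c) has all
    entries in [-1, 1] and every nonzero column with maximum absolute entry 1.
    """
    A = np.asarray(A, dtype=float)
    row_max = np.max(np.abs(A), axis=1)
    r = np.where(row_max > 0, 1.0 / row_max, 1.0)   # guard against all-zero rows
    B = A * r[:, None]
    col_max = np.max(np.abs(B), axis=0)
    c = np.where(col_max > 0, 1.0 / col_max, 1.0)   # guard against all-zero columns
    return r, c, B * c[None, :]

# Illustrative use: a badly scaled 3x3 constraint matrix
A = np.array([[1e6, 2.0, 0.0],
              [3.0, 4e-5, 5.0],
              [0.0, 6.0, 7e3]])
r, c, A_scaled = equilibrate(A)
print(np.max(np.abs(A_scaled), axis=0))  # column maxima are all 1
```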