This paper describes practical randomized algorithms for low-rank matrix approximation that accommodate any budget for the number of views of the matrix. The presented algorithms, which are designed to be as pass-efficient as needed, expand and improve on popular randomized algorithms targeting efficient low-rank reconstructions. First, a more flexible subspace iteration algorithm is presented that works for any number of views v ≥ 2, instead of only allowing an even v. Second, we propose more general and more accurate single-pass algorithms. In particular, we propose a more accurate, memory-efficient single-pass method and a more general single-pass algorithm which, unlike previous methods, does not require prior information to attain near-peak performance. Third, combining ideas from subspace and single-pass algorithms, we present a more pass-efficient randomized block Krylov algorithm, which can achieve a desired accuracy using considerably fewer views than needed by subspace iteration or previously studied block Krylov methods. However, the proposed accuracy-enhanced block Krylov method is restricted to large matrices that are accessed a few columns or rows at a time. Recommendations are also given on how to apply the subspace and block Krylov algorithms when estimating either the dominant left or right singular subspace of a matrix, or when estimating a normal matrix, such as those appearing in inverse problems. Computational experiments demonstrate the applicability and effectiveness of the presented algorithms.
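To make the view-budget idea concrete, the following minimal NumPy sketch shows one way a randomized subspace iteration can accept any number of views v ≥ 2: when v is odd, the first product is taken with Aᵀ instead of A. The function name, the oversampling parameter p, and the QR-based re-orthonormalization are illustrative assumptions for this example, not necessarily the exact algorithm developed in the paper.

```python
import numpy as np

def subspace_iteration_any_views(A, rank, v, p=10, rng=None):
    """Randomized rank-`rank` SVD of A using exactly v views of A.

    Illustrative sketch: odd view budgets are handled by taking the
    first product with A.T instead of A (an assumption made for this
    example about how the flexible scheme can be arranged).
    """
    assert v >= 2
    rng = np.random.default_rng(rng)
    m, n = A.shape
    ell = rank + p                               # sketch width with oversampling
    orth = lambda X: np.linalg.qr(X)[0]          # re-orthonormalize for stability
    if v % 2 == 0:
        Q = orth(A @ rng.standard_normal((n, ell)))    # view 1 (column space)
        used = 1
    else:
        Q = orth(A.T @ rng.standard_normal((m, ell)))  # view 1 (row space)
        Q = orth(A @ Q)                                # view 2 (column space)
        used = 2
    while used < v - 1:                          # alternate A.T / A products
        Q = orth(A.T @ Q)
        Q = orth(A @ Q)
        used += 2
    B = Q.T @ A                                  # final view: B = Qᵀ A
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_hat)[:, :rank], s[:rank], Vt[:rank]
```

For any v, the routine spends v − 1 views building an orthonormal basis Q for the dominant column space and the last view forming the projection B = Qᵀ A, from which the truncated SVD is recovered.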
Motivation. The primary motivation for this study was to develop algorithms that speed up inverse methods used to estimate parameters in models describing subsurface flow in geothermal reservoirs [36,3]. Inverting models describing complex geophysical processes, such as fluid flow in the subsurface, frequently involves matching a large data set using highly parameterized computational models. Running the model commonly involves solving an expensive, nonlinear forward problem. Despite the possible nonlinearity of the forward problem, the link between the model parameters and simulated observations is often described in terms of a Jacobian matrix J ∈ ℝ^(N_d × N_m), which locally linearizes the relationship between the parameters and observations. The size of J is therefore determined by the (large) parameter and observation spaces. In this case, explicitly forming J is out of the question, since at best it involves solving N_m direct problems (linearized forward simulations) or N_d adjoint problems (linearized backward simulations) [6,23,35,38]. Nevertheless, the information contained in J can be helpful for inverting the model using nonlinear inversion methods such as a Gauss-Newton or Levenberg-Marquardt approach, and for quantifying uncertainty. Using adjoint simulations, direct simulations, and randomized algorithms, the necessary information can be extracted from J without ever explicitly forming the large matrix. Bjarkason et al. [3] showed that inversion of a nonlinear...
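The following sketch illustrates this matrix-free access pattern, assuming SciPy's LinearOperator interface: J is touched only through products J v (one direct, i.e. linearized forward, simulation) and Jᵀ u (one adjoint simulation). The callbacks direct_sim and adjoint_sim are hypothetical stand-ins for the simulator interfaces, and the two-view randomized SVD shown is a generic baseline scheme, not the specific algorithms of this paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

def truncated_svd_matrix_free(J_op, rank, p=10, rng=None):
    """Two-view randomized SVD of an implicitly defined Jacobian.

    J_op only needs matvec (J @ v, a direct simulation) and rmatvec
    (J.T @ u, an adjoint simulation); J itself is never formed.
    """
    rng = np.random.default_rng(rng)
    n_d, n_m = J_op.shape
    Omega = rng.standard_normal((n_m, rank + p))
    Q = np.linalg.qr(J_op @ Omega)[0]   # view 1: rank+p direct simulations
    B = (J_op.T @ Q).T                  # view 2: rank+p adjoint simulations, B = Qᵀ J
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_hat)[:, :rank], s[:rank], Vt[:rank]

# Hypothetical usage, with simulator callbacks standing in for the
# linearized forward (direct) and adjoint solves:
#   J_op = LinearOperator((N_d, N_m), matvec=direct_sim, rmatvec=adjoint_sim)
#   U, s, Vt = truncated_svd_matrix_free(J_op, rank=50)
```

Each column of the sketch costs one simulation, so the total number of direct and adjoint runs scales with the target rank rather than with N_m or N_d, which is what makes the randomized approach attractive for large inverse problems.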