Abstract

• A standard Gaussian random matrix has full rank with probability 1 and is well-conditioned with a probability quite close to 1, converging to 1 fast as the matrix deviates from square shape and becomes more rectangular.
• If we append sufficiently many standard Gaussian random rows or columns to any matrix A such that ||A|| = 1, then the augmented matrix has full rank with probability 1 and is well-conditioned with a probability close to 1, even if the matrix A is rank deficient or ill-conditioned.
• We specify and prove these properties of augmentation and extend them to additive preprocessing, that is, to adding a product of two rectangular Gaussian matrices.
• By applying our randomization techniques to a matrix that has numerical rank ρ, we accelerate the known algorithms for the approximation of its leading and trailing singular spaces, associated with its ρ largest and with all its remaining singular values, respectively.
• Our algorithms use far fewer random parameters and run much faster when various random sparse and structured preprocessors replace Gaussian ones. Empirically, the outputs of the resulting algorithms are as accurate as the outputs under Gaussian preprocessing.
• Our novel duality techniques provide formal support, so far missing, for these empirical observations and open the door to derandomization of our preprocessing and to further acceleration and simplification of our algorithms by using more efficient sparse and structured preprocessors.
• Our techniques and our progress can be applied to various other fundamental matrix computations, such as the celebrated low-rank approximation of a matrix by means of random sampling.
Key Words: Randomized matrix algorithms; Gaussian random matrices; Singular spaces of a matrix; Duality; Derandomization; Sparse and structured preprocessors

1 Introduction
Randomized augmentation: outline

A standard Gaussian m × n random matrix G (hereafter referred to simply as Gaussian) has full rank with probability 1 (see Theorem B.1). Furthermore, the expected spectral norms ||G|| and ||G⁺||, where G⁺ denotes the Moore-Penrose generalized inverse, satisfy the following estimates (see Theorems B.2 and B.3):

• E(||G||) ≈ 2√h, for h = max{m, n}, and
• E(||G⁺||) ≈ e√l / |m − n|, provided that l = min{m, n}, m ≠ n, and e = 2.71828 . . . .

Thus, for moderate or reasonably large integers m and n, the matrix G can be considered well-conditioned, with the confidence growing fast as the integer |m − n| increases from 0. By virtue of part 2 of Theorem B.3, the matrix G can be viewed as well-conditioned even for m = n, although with a grain of salt, depending on the context. Motivated by this information, we append sufficiently but reasonably many Gaussian rows or columns to any matrix A, possibly rank deficient or ill-conditioned, but normalized so that ||A|| = 1. (Our approach requires attention to various pitfalls; in particular, it fails without normalization of the input matrix.) Then we prove that the cited properties of a Gaussian matrix also hold for the augmented matrix K and similarly for th...
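The following minimal NumPy sketch (ours, not from the paper; the sizes, seed, and spectrum of A are illustrative assumptions) checks both points numerically: it compares the spectral norms of a rectangular Gaussian matrix and of its pseudoinverse against the estimates above, and it shows that appending Gaussian rows to a normalized matrix A of numerical rank ρ < n yields a well-conditioned augmented matrix K with high probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# (i) Norms of a rectangular Gaussian matrix vs. the cited estimates.
m, n = 200, 100                                  # h = 200, l = 100, |m - n| = 100
G = rng.standard_normal((m, n))
print(np.linalg.norm(G, 2), 2 * np.sqrt(max(m, n)))    # ||G|| vs. 2*sqrt(h)
print(np.linalg.norm(np.linalg.pinv(G), 2),
      np.e * np.sqrt(min(m, n)) / abs(m - n))          # ||G+|| vs. e*sqrt(l)/|m - n|

# (ii) Augmentation of a normalized, numerically rank deficient matrix.
n, rho = 100, 60                                 # numerical rank rho < n (assumed sizes)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.concatenate([np.linspace(1.0, 0.5, rho),  # rho leading singular values
                    np.full(n - rho, 1e-14)])    # n - rho tiny trailing ones
A = U @ np.diag(s) @ V.T                         # ||A|| = 1, cond(A) ~ 1e14
r = n - rho + 10                                 # a few more than n - rho Gaussian rows
K = np.vstack([A, rng.standard_normal((r, n))])
print(np.linalg.cond(A), np.linalg.cond(K))      # cond(K) is moderate w.h.p.
```

In a typical run, cond(A) is of order 1e14 while cond(K) stays in the tens, consistent with the claim that augmentation by Gaussian rows cures rank deficiency and ill-conditioning with high probability.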