Recovering matrices from compressive and grossly corrupted observations is a fundamental problem in robust statistics, with rich applications in computer vision and machine learning. In theory, under certain conditions, this problem can be solved in polynomial time via a natural convex relaxation, known as Compressive Principal Component Pursuit (CPCP). However, many existing provably convergent algorithms for CPCP suffer from superlinear per-iteration cost, which severely limits their applicability to large-scale problems. In this paper, we propose provably convergent, scalable, and efficient methods to solve CPCP with (essentially) linear per-iteration cost. Our method combines classical ideas from Frank-Wolfe and proximal methods: in each iteration, a Frank-Wolfe step, which requires only a rank-one SVD, updates the low-rank component, while a proximal step updates the sparse term. Convergence results and implementation details are discussed. We demonstrate the practicality and scalability of our approach with numerical experiments on visual data.
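To make the iteration described above concrete, the following is a minimal sketch, not the authors' implementation, of how such a hybrid Frank-Wolfe/proximal scheme could look. It assumes the subspace-projection form of CPCP introduced below, a nuclear-norm ball constraint of radius tau_L on L, and an l1 penalty with weight lam on S; all function and variable names here are hypothetical.

```python
import numpy as np
from scipy.sparse.linalg import svds

def soft_threshold(X, tau):
    """Entrywise shrinkage: the proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def fw_prox_cpcp(P_Q, M_obs, tau_L, lam, n_iters=100):
    """Hypothetical sketch of a hybrid Frank-Wolfe/proximal iteration for
        min  0.5 * ||P_Q[L + S] - M_obs||_F^2 + lam * ||S||_1
        s.t. ||L||_* <= tau_L,
    where M_obs = P_Q[M_0] is the observed projection of the data matrix
    and P_Q is a callable applying the orthogonal projection onto Q."""
    m, n = M_obs.shape
    L = np.zeros((m, n))
    S = np.zeros((m, n))
    for k in range(n_iters):
        # Gradient of the smooth term with respect to L (and to S); the
        # residual already lies in Q, so no extra projection is needed.
        G = P_Q(L + S) - M_obs
        # Frank-Wolfe step for L: the linear minimization oracle over the
        # nuclear-norm ball needs only the top singular vector pair of G.
        u, _, vt = svds(G, k=1)
        V = -tau_L * np.outer(u[:, 0], vt[0, :])   # best rank-one vertex
        gamma = 2.0 / (k + 2.0)                    # standard FW step size
        L = (1.0 - gamma) * L + gamma * V
        # Proximal gradient step for S; step size 1 is safe because P_Q is
        # an orthogonal projection, so the gradient is 1-Lipschitz in S.
        S = soft_threshold(S - (P_Q(L + S) - M_obs), lam)
    return L, S
```

The point the abstract emphasizes is visible in the sketch: the only spectral computation per iteration is a rank-one SVD (svds with k=1), so the per-iteration cost stays essentially linear in the number of matrix entries.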
AMS subject classifications. 90C06, 90C25, 90C52

(1.3)    min_{L,S}  ‖L‖_* + λ‖S‖_1   subject to   A[L + S] = A[M_0].

This convex surrogate is sometimes referred to as compressive principal component pursuit (CPCP) [1]. Equivalently, since the constraint in (1.3) involves L + S only through its projection onto the subspace Q spanned by the measurement matrices defining A[·], problem (1.3) can be recast as

(1.4)    min_{L,S}  ‖L‖_* + λ‖S‖_1   subject to   P_Q[L + S] = P_Q[M_0],

where P_Q denotes the orthogonal projection onto Q. To transform problem (1.3) into problem (1.4), simple procedures like Gram-Schmidt might be invoked. Despite being equivalent, one formulation might be preferred over the other in practice, depending on the specifics of the sensing operator A[·]. In this paper, we will mainly focus on solving problem (1.4) and its variants. Our methods, however, are not restricted to (1.4) and can easily be extended to problem (1.3).
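As a concrete illustration of the Gram-Schmidt reduction from (1.3) to (1.4), the sketch below (a hypothetical helper, not from the paper) assumes A[·] is given by inner products with linearly independent measurement matrices A_1, ..., A_p, i.e. A[M] = (<A_i, M>)_{i=1..p}; a QR factorization of the flattened measurement matrices plays the role of classical Gram-Schmidt.

```python
import numpy as np

def projection_from_measurements(A_mats):
    """Build P_Q, the orthogonal projection onto Q = span{A_1, ..., A_p},
    from measurement matrices defining A[M] = (<A_i, M>)_{i=1..p}.
    The QR factorization below plays the role of classical Gram-Schmidt,
    assuming the A_i are linearly independent."""
    m, n = A_mats[0].shape
    # Stack each flattened measurement matrix as a column of B.
    B = np.stack([A.ravel() for A in A_mats], axis=1)   # shape (m*n, p)
    Q_basis, _ = np.linalg.qr(B)   # orthonormal columns spanning Q
    def P_Q(M):
        v = M.ravel()
        return (Q_basis @ (Q_basis.T @ v)).reshape(m, n)
    return P_Q
```

The returned P_Q can then be passed to a solver for (1.4), such as the hybrid Frank-Wolfe/proximal iteration sketched after the abstract.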