Another line of work has focused on the development of fast nonconvex algorithms [Lee et al., 2018, Ma et al., 2018, Huang and Hand, 2018, Charisopoulos et al., 2019], which was largely motivated by recent advances in efficient nonconvex optimization for tackling statistical estimation problems [Candès et al., 2015, Chen and Candès, 2017, Charisopoulos et al., 2021, Keshavan et al., 2009, Jain et al., 2013, Zhang et al., 2016, Chen and Wainwright, 2015, Sun and Luo, 2016, Zheng and Lafferty, 2016, Wang et al., 2017a, Cai et al., 2021b, Wang et al., 2017b, Qu et al., 2017, Duchi and Ruan, 2019, Ma et al., 2019] (see Chi et al. [2019] for an overview). One representative work in this line proposed a feasible nonconvex recipe that optimizes a regularized squared loss (which includes an extra penalty term to promote incoherence), and showed that, in conjunction with proper initialization, nonconvex gradient descent converges to the ground truth in the absence of noise.
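To make this recipe concrete, a schematic (and deliberately generic) form of such a regularized objective is the following, where the measurement operator $\mathcal{A}$, the penalty $R$, and the weight $\lambda$ are placeholders rather than the exact formulation of any of the cited works:
\[
  \min_{X}\; f(X) \;=\; \frac{1}{2}\,\bigl\|\mathcal{A}(X) - y\bigr\|_2^2 \;+\; \lambda\, R(X),
  \qquad
  X_{t+1} \;=\; X_t - \eta\, \nabla f(X_t) .
\]
Here $R(\cdot)$ penalizes iterates that violate incoherence (e.g., rows of excessively large norm), $\eta > 0$ is the step size, and $X_0$ is typically obtained from a spectral initialization; in the noiseless setting, $y = \mathcal{A}(X^\star)$ and the iterates converge to the ground truth $X^\star$.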