Nonlinear acceleration algorithms improve the performance of iterative methods, such as gradient descent, using the information contained in past iterates. However, their efficiency is still not entirely understood, even in the quadratic case. In this paper, we clarify the convergence analysis by giving general properties shared by several classes of nonlinear acceleration: Anderson acceleration (and variants), quasi-Newton methods (such as Broyden Type-I or Type-II, SR1, DFP, and BFGS), and Krylov methods (Conjugate Gradient, MINRES, GMRES). In particular, we propose a generic family of algorithms that contains all the previous methods and prove its optimal rate of convergence when minimizing quadratic functions. We also propose multi-secant updates for the quasi-Newton methods listed above. We provide a Matlab code implementing the algorithm.

Other techniques, such as quasi-Newton schemes, popular in optimization, approximate the Newton step using a matrix H ≈ (∇²f(x_i))⁻¹, as follows:

x_{i+1} = x_i − H ∇f(x_i).

This can be extended to fixed-point iterations by coupling a fixed-point step with a quasi-Newton step,

x_{i+1} = x_i − H (x_i − g(x_i)).

Such a matrix H can be found using several formulas. The simplest ones are the Broyden Type-I and Type-II updates [3], and the most popular are certainly BFGS and DFP [17]. There also exists the symmetric rank-one (SR1) update, which has been rediscovered many times in many different fields.

Finally, we study Krylov subspace techniques such as the Conjugate Gradient method and GMRES [21]. These algorithms minimize an error function over a Krylov basis, usually updated with orthonormal vectors to ensure stability. Their primary use is solving large systems of linear equations and minimizing quadratic functions.

The optimal convergence rate of Krylov methods is well known when the fixed-point operator g is a linear mapping, and the work of [26] shows similar performance for Anderson acceleration.
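To make the quasi-Newton step concrete, the following is a minimal sketch of the Broyden Type-II ("bad Broyden") update in Python with NumPy. The quadratic test problem and all names are illustrative choices, not taken from the paper; the update enforces the secant condition H y = s on the inverse-Jacobian estimate.

```python
import numpy as np

def broyden_type2(F, x0, max_iter=50, tol=1e-10):
    """Broyden Type-II method for solving F(x) = 0.

    Maintains H, an estimate of the inverse Jacobian of F, corrected
    after each step by a rank-one secant update so that H y = s.
    """
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)            # initial inverse-Jacobian estimate
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = -H @ Fx               # quasi-Newton step: x_{i+1} = x_i - H F(x_i)
        x = x + s
        F_new = F(x)
        y = F_new - Fx            # change in the residual
        H += np.outer(s - H @ y, y) / (y @ y)   # Type-II secant correction
        Fx = F_new
    return x

# Illustrative example: F is the gradient of a strongly convex quadratic
# f(x) = x'Ax/2 - b'x, so the root of F is the minimizer x* = A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = broyden_type2(lambda x: A @ x - b, np.zeros(2))
```

On an affine residual such as this one, Broyden-type updates terminate in finitely many steps, which is the setting where their rates can be compared with Krylov methods.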
For quasi-Newton methods, the results are less clear, even for quadratic objectives in two variables. For example, the DFP and BFGS algorithms may converge poorly without line search [20].

When the function g is nonlinear, it is unclear how fast those methods converge. In particular, the poor theoretical rates of convergence (when available) do not match the usually good numerical performance. The lack of robustness of nonlinear acceleration algorithms may explain this phenomenon, since instability issues are known for some of them [19, 14, 23].

With recent results from [23, 24, 27], it is now possible to have nonlinear acceleration techniques that achieve an asymptotically optimal rate of convergence even in the presence of stochastic noise. However, because the analyses of nonlinear acceleration methods are independent of each other, we unify them to identify the central argument of nonlinear acceleration.

Several results have linked acceleration methods to each other. For example, [7] proposes a general family of Broyden methods, including the Type-I and Type-II updates as well as Anderson mixing. Walker and Ni [28] show the link between Anderson acceleration and GMRES. ...
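The Anderson–GMRES connection is easiest to see from the extrapolation itself: Anderson acceleration combines past iterates with weights that minimize the norm of the combined residual. A minimal sketch in Python with NumPy follows; the linear fixed-point map, the regularization constant, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def anderson(g, x0, m=5, max_iter=100, tol=1e-8):
    """Anderson acceleration for the fixed-point problem x = g(x).

    Keeps the last m iterates x_k and residuals r_k = g(x_k) - x_k, then
    forms the next iterate as sum_k alpha_k g(x_k), where the weights
    minimize ||sum_k alpha_k r_k|| subject to sum_k alpha_k = 1.
    """
    x = np.asarray(x0, dtype=float)
    Xs, Rs = [], []                       # histories of iterates / residuals
    for _ in range(max_iter):
        r = g(x) - x                      # fixed-point residual
        if np.linalg.norm(r) < tol:
            break
        Xs.append(x); Rs.append(r)
        Xs, Rs = Xs[-m:], Rs[-m:]         # keep a window of size m
        R = np.column_stack(Rs)
        # Constrained least squares for the weights: alpha is proportional
        # to (R'R + eps I)^{-1} 1, normalized to sum to one (eps is a
        # small regularizer against rank-deficient histories).
        z = np.linalg.solve(R.T @ R + 1e-12 * np.eye(R.shape[1]),
                            np.ones(R.shape[1]))
        alpha = z / z.sum()
        # Extrapolated point sum_k alpha_k g(x_k), using g(x_k) = x_k + r_k.
        x = (np.column_stack(Xs) + R) @ alpha
    return x

# Illustrative affine map g(x) = M x + c with spectral radius of M below 1;
# the fixed point is x* = (I - M)^{-1} c.
M = np.array([[0.5, 0.1], [0.0, 0.3]])
c = np.array([1.0, 2.0])
x_fix = anderson(lambda x: M @ x + c, np.zeros(2))
```

When g is affine, as here, the combined residual the weights minimize lives in a Krylov subspace, which is the observation behind the equivalence with GMRES.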