This work proposes an accelerated first-order algorithm, which we call the Robust Momentum Method, for optimizing smooth strongly convex functions. The algorithm has a single scalar parameter that can be tuned to trade off robustness to gradient noise against worst-case convergence rate. At one extreme, the algorithm is faster than Nesterov's Fast Gradient Method by a constant factor but more fragile to noise; at the other extreme, it reduces to the Gradient Method and is very robust to noise. The design technique is inspired by methods from classical control theory, and the resulting algorithm has a simple analytical form. Algorithm performance is verified in a series of numerical simulations, both in the noise-free case and under relative gradient noise.

Notation. The set of functions that are m-strongly convex and L-smooth is denoted F(m, L). In particular, f ∈ F(m, L) if for all x, y ∈ Rⁿ,

    (m/2) ‖y − x‖² ≤ f(y) − f(x) − ∇f(x)ᵀ(y − x) ≤ (L/2) ‖y − x‖².

The condition ratio is defined as κ := L/m.

² A numerical study in [3] revealed that the standard rate bound for FGM derived in [2] is conservative. Nevertheless, the bound has a simple algebraic form and is asymptotically tight.
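As a quick sanity check on the definition of F(m, L) above, the following Python sketch verifies the two-sided inequality numerically for a diagonal quadratic whose eigenvalues lie in [m, L]; the dimension, the values of m and L, and the random sampling are illustrative assumptions, not choices made in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5
m, L = 1.0, 10.0                      # assumed strong-convexity / smoothness constants
Q = np.diag(np.linspace(m, L, n))     # quadratic f(x) = 0.5 x'Qx lies in F(m, L)
kappa = L / m                         # condition ratio kappa = L/m

f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

# Check the sandwich inequality defining F(m, L) at random point pairs:
# (m/2)||y - x||^2 <= f(y) - f(x) - grad f(x)'(y - x) <= (L/2)||y - x||^2
for _ in range(1000):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    gap = f(y) - f(x) - grad(x) @ (y - x)   # curvature residual between x and y
    d2 = np.dot(y - x, y - x)
    assert (m / 2) * d2 - 1e-12 <= gap <= (L / 2) * d2 + 1e-12

print("F(m, L) inequalities hold at all sampled pairs; kappa =", kappa)
```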
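For reference, here is a minimal sketch of the two endpoint methods mentioned in the abstract, the Gradient Method and Nesterov's Fast Gradient Method, applied to a quadratic in F(m, L). The step size 1/L and momentum (√κ − 1)/(√κ + 1) are the textbook choices for this function class; the test problem, dimension, and iteration count are illustrative assumptions. This is not the proposed Robust Momentum Method, whose parameterization is given in the body of the paper.

```python
import numpy as np

# Illustrative quadratic f(x) = 0.5 x'Qx with eigenvalues of Q in [m, L] (assumed values).
m, L = 1.0, 100.0
kappa = L / m
Q = np.diag(np.linspace(m, L, 10))
grad = lambda x: Q @ x
x0 = np.ones(10)

# Gradient Method: x_{k+1} = x_k - (1/L) grad f(x_k)
x = x0.copy()
for _ in range(500):
    x = x - (1.0 / L) * grad(x)
print("GM  final distance to optimum:", np.linalg.norm(x))

# Nesterov's Fast Gradient Method (constant momentum for strongly convex f):
# x_{k+1} = y_k - (1/L) grad f(y_k),  y_{k+1} = x_{k+1} + beta (x_{k+1} - x_k)
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
x_prev, y = x0.copy(), x0.copy()
for _ in range(500):
    x_next = y - (1.0 / L) * grad(y)
    y = x_next + beta * (x_next - x_prev)
    x_prev = x_next
print("FGM final distance to optimum:", np.linalg.norm(x_prev))
```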