We study the robustness of accelerated first-order algorithms to stochastic uncertainties in gradient evaluation. Specifically, for unconstrained, smooth, strongly convex optimization problems, we examine the mean-square error in the optimization variable when the iterates are perturbed by additive white noise. This type of uncertainty may arise in situations where an approximation of the gradient is sought through measurements of a real system or in a distributed computation over a network. Even though the underlying dynamics of first-order algorithms for this class of problems are nonlinear, we establish upper bounds on the mean-square deviation from the optimal value that are tight up to constant factors. Our analysis quantifies fundamental trade-offs between noise amplification and convergence rates obtained via any acceleration scheme similar to Nesterov's or heavy-ball methods. To gain additional analytical insight, for strongly convex quadratic problems we explicitly evaluate the steady-state variance of the optimization variable in terms of the eigenvalues of the Hessian of the objective function. We demonstrate that the entire spectrum of the Hessian, rather than just the extreme eigenvalues, influences the robustness of noisy algorithms. We specialize this result to the problem of distributed averaging over undirected networks and examine the role of network size and topology in the robustness of noisy accelerated algorithms.

In prior work, robustness was quantified via the steady-state mean error in the objective value, and it was shown that, for the same convergence rate, a Nesterov-like method with properly selected parameters can be more robust than gradient descent. This is not surprising because gradient descent can be viewed as a special case of Nesterov's method with a zero momentum parameter. This observation was used in [41] to design an optimal multi-stage algorithm that does not require information about the variance of the noise. In this paper, however, we focus on the variance amplification of the iterates (rather than the function value) and discuss the connections between the two robustness measures. We show that any choice of parameters for Nesterov's or the heavy-ball method that yields an accelerated convergence rate increases variance amplification relative to gradient descent. More precisely, for a problem with condition number κ, an algorithm that achieves an accelerated convergence rate of 1 − c/√κ or faster, where c is a positive constant, increases the variance amplification of the iterates by a factor of √κ. The robustness problem was also studied in [42], where the authors show similar behavior of Nesterov's method and gradient descent in an asymptotic regime in which the stepsize goes to zero.

4) We extend our analysis from quadratic objective functions to general strongly convex problems. We borrow an approach based on linear matrix inequalities from control theory to establish upper bounds on the noise amplification of both gradient descent and Nesterov's accelerated algorithm. Furthermore, for any given condition number, we demonstrate th...
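To make the trade-off between acceleration and noise amplification concrete, the following sketch (our own illustration, not the paper's analysis or experiments) simulates gradient descent and the heavy-ball method on a strongly convex quadratic when only a noisy gradient is available, and estimates the steady-state mean-square error of the iterates by time averaging. The diagonal Hessian, the noise level sigma, and the parameter choices (stepsize 1/L for gradient descent; Polyak's accelerated stepsize and momentum for heavy-ball) are assumptions introduced purely for illustration; for condition number κ = 100, the heavy-ball error should exceed that of gradient descent by a factor on the order of √κ.

# Minimal numerical sketch (our own illustration, not the paper's experiments):
# compare the steady-state mean-square error of noisy gradient descent and the
# noisy heavy-ball method on a strongly convex quadratic f(x) = 0.5 x'Hx, when
# only a gradient corrupted by additive white noise is available. The diagonal
# Hessian, noise level sigma, and textbook parameter choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def steady_state_mse(H, alpha, beta, sigma=1e-2, iters=150_000, burn_in=30_000):
    """Run x_{k+1} = x_k + beta (x_k - x_{k-1}) - alpha (H x_k + sigma w_k),
    where w_k is standard white noise (beta = 0 recovers gradient descent),
    and return the time-averaged squared distance from the minimizer x* = 0."""
    n = H.shape[0]
    x_prev = x = np.zeros(n)
    total, count = 0.0, 0
    for k in range(iters):
        noisy_grad = H @ x + sigma * rng.standard_normal(n)
        x_next = x + beta * (x - x_prev) - alpha * noisy_grad
        x_prev, x = x, x_next
        if k >= burn_in:
            total += x @ x
            count += 1
    return total / count

# Diagonal Hessian with eigenvalues spread over [mu, L]; condition number kappa = L/mu.
mu, L, n = 1.0, 100.0, 20
H = np.diag(np.linspace(mu, L, n))
kappa = L / mu

# Gradient descent with stepsize 1/L (no momentum).
J_gd = steady_state_mse(H, alpha=1.0 / L, beta=0.0)

# Heavy-ball with Polyak's parameters, which give an accelerated linear rate on quadratics.
alpha_hb = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta_hb = ((np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)) ** 2
J_hb = steady_state_mse(H, alpha=alpha_hb, beta=beta_hb)

print(f"kappa = {kappa:.0f}, sqrt(kappa) = {np.sqrt(kappa):.1f}")
print(f"steady-state MSE, gradient descent: {J_gd:.3e}")
print(f"steady-state MSE, heavy-ball:       {J_hb:.3e}")
print(f"ratio heavy-ball / gradient descent: {J_hb / J_gd:.1f}")

The noise is injected through the gradient evaluation, matching the motivation given above of a gradient approximated from measurements of a real system, and the heavy-ball recursion with β = 0 reduces to gradient descent, mirroring the observation that gradient descent is a special case of momentum methods with a zero momentum parameter.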