We analyze a class of stochastic gradient algorithms with momentum on a high-dimensional random least squares problem. Our framework, inspired by random matrix theory, provides an exact (deterministic) characterization of the sequence of loss values produced by these algorithms, expressed only in terms of the eigenvalues of the Hessian. This leads to simple expressions for nearly optimal hyperparameters, a description of the limiting neighborhood, and average-case complexity. As a consequence, we show that (small-batch) stochastic heavy-ball momentum with a fixed momentum parameter provides no actual performance improvement over SGD when step sizes are adjusted correctly. In contrast, in the non-strongly convex setting, momentum can yield a large improvement over SGD. By introducing hyperparameters that depend on the number of samples, we propose a new algorithm, sDANA (stochastic dimension-adjusted Nesterov acceleration), which attains an asymptotically optimal average-case complexity while remaining linearly convergent in the strongly convex setting, without adjusting parameters.

Methods that incorporate momentum and acceleration play an integral role in machine learning, where they are often combined with stochastic gradients. Two of the most popular methods in this category are the heavy-ball method (HB) [Polyak, 1964] and Nesterov's accelerated method (NAG) [Nesterov, 2004]. These methods are known to achieve optimal convergence guarantees when run with exact gradients (computed on the full training data set), but in practice they are typically implemented with stochastic gradients. In influential work, Sutskever et al. [2013] demonstrated the empirical advantages of augmenting stochastic gradient descent (SGD) with momentum, and as a result, momentum methods are widely used for training deep neural networks. Yet despite the popularity of these stochastic momentum methods, theoretical understanding of them remains rather limited.
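For orientation, the stochastic variants of HB and NAG referred to above take the following classical forms (written in generic notation, not this paper's: here $\alpha$ is a step size, $\beta$ a fixed momentum parameter, and $\nabla f_{i_k}$ a mini-batch stochastic gradient at iteration $k$):
\[
x_{k+1} = x_k - \alpha\,\nabla f_{i_k}(x_k) + \beta\,(x_k - x_{k-1})
\qquad \text{(stochastic heavy-ball)},
\]
\[
y_k = x_k + \beta\,(x_k - x_{k-1}), \qquad
x_{k+1} = y_k - \alpha\,\nabla f_{i_k}(y_k)
\qquad \text{(stochastic Nesterov-type update)}.
\]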