We propose a vector autoregressive (VAR) model with a low-rank constraint on the transition matrix. This new model is well suited to predicting high-dimensional series that are highly correlated, or that are driven by a small number of hidden factors. We study estimation, prediction, and rank selection for this model in a very general setting. Our method shows excellent performance on a wide variety of simulated datasets. On macroeconomic data from Giannone et al. (2015), our method is competitive with state-of-the-art methods in small dimension, and even improves on them in high dimension.

(1995); Francq and Zakoian (2019). In this paper, we propose a vector autoregressive (VAR) model that is suitable for predicting high-dimensional series that are strongly correlated, or that are driven by a small number of hidden factors. These features are captured by imposing a low-rank constraint on the transition matrix. The coefficients can be efficiently computed by convex optimization techniques.

Let us briefly describe the motivation for this model. Assume we deal with an R^M-valued process (X_t)_{t≥0} with

    X_{t+1} = A X_t + ε_{t+1},    (1)

for a very large M. For example, think of daily sales of items on Amazon, where an item might be "iPhone 7 128Go black", "Lord of the Rings - Harper Collins Box Set, 1991" or "Estimation of Dependences Based on Empirical Data: Second Edition, by Vladimir Vapnik, Springer". Then, even with a few years of observations, we observe X_t for at most a few thousand days t, while M is probably of the order of 10^5 or 10^6. Thus the estimation of the M^2 coefficients of the matrix A is impossible. Some constraints are necessary to reduce the dimension of the problem. We believe that sparsity of A, studied by Davis et al. (2016) in another context, does not make sense here. On the other hand, it is clear that a few factors, such as the current economic conditions or the period of the year, have a strong influence on the series.
Assuming these factors H_t are linear functions of X_t, we can write them as H_t = V X_t for some r × M matrix V. Then, assuming that X_{t+1} can be linearly predicted by H_t, we can predict X_{t+1} by U H_t = U V X_t for some M × r matrix U. At the end of the day, we indeed predict X_{t+1} by (U V) X_t = A X_t, but the rank of A is r ≪ M.

Note that the assumption that the coefficient matrix A is low-rank in a multivariate regression model Y_i = A X_i + ξ_i, where Y_i ∈ R^s and X_i ∈ R^t, was studied in econometric theory as early as the 1950s (Anderson, 1951; Izenman, 1975). It is referred to as reduced-rank regression (RRR). We refer the reader to Koltchinskii et al. (2011); Suzuki (2015); Alquier et al. (2017); Klopp et al. (2017a,b); Moridomi et al. (2018) for state-of-the-art results. Low-rank matrices were actually used to model high-dimensional time series by De Castro et al. (2017) and Alquier and Marie (2019); however, the models described in these papers cannot be straightforwardly used for prediction purposes. Here, we study estimation and prediction for the model (1).

The paper is organized as follows. In the end of the in...
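The factor construction above can be illustrated with a minimal numerical sketch. All names and dimensions here (M, r, T, the matrices U and V, the noise scale) are illustrative choices, not values from the paper; the point is only that predicting through the r-dimensional factors H_t = V X_t gives exactly the same forecast as the full M × M matrix A = U V, while involving 2Mr rather than M^2 parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
M, r, T = 50, 3, 200            # illustrative: ambient dimension, rank, sample size

# Low-rank transition matrix A = U V, with U of size M x r and V of size r x M.
U = rng.normal(size=(M, r)) / np.sqrt(M)
V = rng.normal(size=(r, M)) / np.sqrt(M)
A = U @ V                        # rank(A) <= r, but A has M^2 entries

# Simulate the VAR(1) process X_{t+1} = A X_t + eps_{t+1}.
X = np.zeros((T, M))
for t in range(T - 1):
    X[t + 1] = A @ X[t] + 0.1 * rng.normal(size=M)

# Predict through the factors H_t = V X_t, then U H_t ...
H = X[:-1] @ V.T                 # factors, shape (T-1, r)
pred_factor = H @ U.T            # U H_t for each t
# ... versus predicting directly with the full matrix A:
pred_full = X[:-1] @ A.T         # A X_t for each t

assert np.linalg.matrix_rank(A) <= r
assert np.allclose(pred_factor, pred_full)
```

The two prediction routes coincide by construction, since (U V) X_t = U (V X_t); the low-rank route only ever manipulates U and V, i.e. 2Mr coefficients.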