This paper proposes a stabilizing Model Predictive Control algorithm, specifically designed to handle systems learned by Incrementally Input-to-State Stable Recurrent Neural Networks, in the presence of input and incremental input constraints. Closed-loop stability is proven by relying on the Incremental Input-to-State Stability property of the model and on a terminal equality constraint involving the control sequence only. The Incremental Input-to-State Stability property is also used to derive a suitable formulation of the Model Predictive Control terminal cost. The proposed control algorithm can be readily applied to a wide range of Recurrent Neural Networks, including Gated Recurrent Units, Echo State Networks, and Neural Nonlinear Autoregressive eXogenous models. Furthermore, this work specializes the approach to the particular case of Long Short-Term Memory networks and showcases its effectiveness on a four-tank process benchmark.