This paper presents a novel approach to learning predictive motor control via "mental simulations". The method, inspired by learning via mental imagery in natural cognition, proceeds in two phases: first, predictive models are learned from data recorded during interaction with the environment; then, at a deferred time, inverse models are synthesized via offline episodic simulations. A parallel is drawn with the human-engineered control-theoretic workflow (mathematical modeling of the direct dynamics followed by optimal-control inversion). Compared to the latter human-directed synthesis, the mental simulation approach increases autonomy: a robotic agent can learn predictive models and synthesize inverse ones with a large degree of independence. Human modeling is still required, but it is limited to providing efficient templates for the forward and inverse neural networks and a few other directives. One could view these templates as analogous to the efficient brain network topologies that evolution produced to let living beings learn quickly and efficiently. Both the forward and the inverse neural networks are structured as interpretable "local models", following the cerebellar organization (and resembling local-model approaches known in the literature). We demonstrate the learning of a first-round model, contrasted with Model Predictive Control, for lateral vehicle dynamics. We then demonstrate a second learning iteration, in which the forward and inverse neural models are significantly improved.
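
The two-phase workflow described above can be illustrated with a minimal sketch under strong simplifying assumptions: a 1-D linear plant, linear forward/inverse models, and least-squares fitting stand in for the paper's neural local models; all names and parameters here are hypothetical, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Phase 1: learn a forward (predictive) model from interaction data ---
# True (unknown) plant: x' = a*x + b*u, observed with noise.
a_true, b_true = 0.9, 0.5
X = rng.uniform(-1, 1, 500)
U = rng.uniform(-1, 1, 500)
Xn = a_true * X + b_true * U + 0.01 * rng.standard_normal(500)

# Least-squares fit of the forward model x' ~ a*x + b*u.
(a_hat, b_hat), *_ = np.linalg.lstsq(np.column_stack([X, U]), Xn, rcond=None)

# --- Phase 2: synthesize an inverse model via offline "mental" episodes ---
# Roll imagined actions through the learned forward model, then fit an
# inverse model u ~ w1*x + w2*x_desired on the imagined transitions.
Xs = rng.uniform(-1, 1, 500)
Us = rng.uniform(-1, 1, 500)
Xd = a_hat * Xs + b_hat * Us          # imagined outcomes, no real interaction
(w1, w2), *_ = np.linalg.lstsq(np.column_stack([Xs, Xd]), Us, rcond=None)

# The inverse model now proposes the action that reaches a desired state.
u = w1 * 0.2 + w2 * 0.8               # from x = 0.2 toward x_desired = 0.8
x_next = a_true * 0.2 + b_true * u    # apply it to the true plant
print(round(x_next, 2))
```

The key point the sketch captures is that phase 2 needs no further environment interaction: the inverse model is trained entirely on episodes simulated by the learned forward model.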