In this paper, we present the use of Reinforcement Learning (RL) based on Robust Model Predictive Control (RMPC) for the control of an Autonomous Surface Vehicle (ASV). The RL-MPC strategy is used for obstacle avoidance and target (set-point) tracking. A scenario-tree robust MPC handles potential failures of the ship thrusters, while wind and ocean currents are treated as unknown stochastic disturbances acting on the real system and are handled via constraint tightening. The tightening and other cost parameters are adjusted by RL using a Q-learning technique. An economic cost is considered, minimizing the time and energy required to complete the ship's missions. The method is illustrated in simulation on a nonlinear 3-DOF model of a scaled version of the Cybership II.
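To make the parameter-adjustment idea concrete, the following is a minimal, illustrative sketch of a Q-learning update applied to the parameters of a parameterized MPC scheme (e.g., constraint-tightening margins and economic cost weights). It is not the authors' implementation: the function names, step sizes, and numerical values are hypothetical, and the MPC-derived quantities (the action-value estimate and its parameter sensitivity) are assumed to be supplied by an external solver.

```python
import numpy as np

# Hypothetical sketch: semi-gradient Q-learning update of MPC parameters theta.
# Q_theta(s, a), V_theta(s'), and dQ/dtheta are assumed to come from the MPC
# solver's solution and sensitivity analysis (not computed here).

def td_error(q_sa, stage_cost, v_next, gamma=0.99):
    # Temporal-difference error: delta = L(s, a) + gamma * V_theta(s') - Q_theta(s, a)
    return stage_cost + gamma * v_next - q_sa

def q_learning_step(theta, grad_q_sa, q_sa, stage_cost, v_next,
                    alpha=1e-3, gamma=0.99):
    # Parameter update: theta <- theta + alpha * delta * dQ/dtheta
    delta = td_error(q_sa, stage_cost, v_next, gamma)
    return theta + alpha * delta * grad_q_sa

# Example usage with placeholder (illustrative) numbers:
theta = np.array([0.1, 0.1, 1.0])       # e.g., tightening margins, cost weight
grad_q = np.array([0.5, -0.2, 0.05])    # dQ/dtheta from MPC sensitivities
theta = q_learning_step(theta, grad_q, q_sa=2.3, stage_cost=0.4, v_next=2.0)
```

In this kind of scheme, the economic stage cost (time and energy) drives the TD error, so the RL layer gradually reshapes the MPC tightening and cost parameters toward better closed-loop performance under the stochastic wind and current disturbances.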