Many robotic tasks, such as human-robot interactions or the handling of fragile objects, require tight control and limitation of the forces and moments that arise, in addition to accurate motion control, to achieve safe yet high-performance operation. We propose a learning-supported model predictive force and motion control scheme that provides stochastic safety guarantees while adapting to changing situations. Gaussian processes are used to learn the uncertain relations that map the robot's states to the forces and moments. The model predictive controller uses these Gaussian process models to achieve precise motion and force control under stochastic constraint satisfaction. As the uncertainty occurs only in the static model parts, namely the output equations, a computationally efficient stochastic MPC formulation is used. Recursive feasibility of the optimal control problem and convergence of the closed-loop system are analyzed for the static-uncertainty case. Chance constraints and the corresponding back-offs are constructed based on the Gaussian process variance to guarantee safe operation. The approach is illustrated on a lightweight robot in simulations and experiments.
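As a minimal sketch of the variance-based back-off idea (the symbols $f_{\max}$, $\mu_f$, $\sigma_f$, and $p$ are illustrative assumptions, not necessarily the paper's notation): for a force component whose GP posterior is Gaussian, a chance constraint can be tightened into a deterministic constraint on the posterior mean,
\[
\Pr\!\left[f(x_k) \le f_{\max}\right] \ge p
\quad\Longleftarrow\quad
\mu_f(x_k) + \Phi^{-1}(p)\,\sigma_f(x_k) \le f_{\max},
\]
where $\mu_f$ and $\sigma_f$ denote the GP posterior mean and standard deviation, $\Phi^{-1}$ is the inverse standard normal CDF, and the term $\Phi^{-1}(p)\,\sigma_f(x_k)$ plays the role of the back-off subtracted from the force limit.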