Recommender systems are increasingly used to predict the preferences of users on online platforms and to recommend relevant options that help them cope with information overload. In particular, modern model-based collaborative filtering algorithms, such as latent factor models, are considered state-of-the-art in recommender systems. Unfortunately, these black box systems lack transparency, as they provide little information about the reasoning behind their predictions. White box systems, in contrast, can easily generate explanations by design; however, their predictions are less accurate than those of sophisticated black box models. Recent research has demonstrated that explanations are an essential component in bringing the powerful predictions of big data and machine learning methods to a mass audience without compromising trust. Explanations can take a variety of formats, depending on the recommendation domain and the machine learning model used to make predictions. The objective of this work is to build a recommender system that generates both accurate predictions and semantically rich explanations that justify those predictions. We propose a novel approach that builds an explanation generation mechanism into a latent factor-based black box recommendation model. The model is trained to make predictions accompanied by explanations that are automatically mined from the semantic web. Our evaluation experiments, which carefully study the trade-offs between the quality of predictions and explanations, show that our proposed approach succeeds in producing explainable predictions without a significant sacrifice in prediction accuracy.
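
To make the idea concrete, the sketch below shows one plausible way to fold an explanation signal into a latent factor model: a matrix factorization trained by stochastic gradient descent whose loss adds a regularizer pulling a user's and an item's latent factors together in proportion to a precomputed explainability score. This is a minimal illustration under stated assumptions, not the paper's method: the matrix `E`, the weight `lam_expl`, and the random stand-in data are hypothetical, and the actual mining of explanations from the semantic web is not modeled here.

```python
import numpy as np

# Hypothetical toy data: R is a small rating matrix where 0 means "unrated".
# E holds explainability scores in [0, 1] for each (user, item) pair; here it
# is random, standing in for scores that would be mined from an external
# source such as the semantic web.
rng = np.random.default_rng(0)
n_users, n_items, k = 20, 30, 5
R = rng.integers(0, 6, size=(n_users, n_items)).astype(float)
E = rng.random((n_users, n_items)) * (R > 0)

P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors

lr, reg, lam_expl = 0.01, 0.02, 0.1            # illustrative hyperparameters
for epoch in range(50):
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - P[u] @ Q[i]
        # Standard L2-regularized matrix factorization gradients ...
        grad_p = -err * Q[i] + reg * P[u]
        grad_q = -err * P[u] + reg * Q[i]
        # ... plus an explainability term, lam_expl * E[u,i] * ||P[u]-Q[i]||^2 / 2,
        # that pulls the user and item factors together whenever the item is
        # highly explainable for that user.
        grad_p += lam_expl * E[u, i] * (P[u] - Q[i])
        grad_q += lam_expl * E[u, i] * (Q[i] - P[u])
        P[u] -= lr * grad_p
        Q[i] -= lr * grad_q
```

In a sketch of this kind, `lam_expl` is the knob that governs the trade-off studied in the evaluation: increasing it biases the model toward items that can be explained, at a potential cost in raw prediction accuracy.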