Abstract—Recommender systems provide users with content they might be interested in. Conventionally, recommender systems are evaluated primarily using prediction accuracy metrics. However, the ultimate goal of a recommender system is to increase user satisfaction. Therefore, evaluations that measure user satisfaction should also be performed before deploying a recommender system to a real target environment. Such evaluations, however, are more laborious and complicated than traditional data-centric evaluations. In this study, we investigate the added value of user-centric evaluations and how user satisfaction with a recommender system relates to its performance in terms of accuracy metrics. We conduct both a data-centric evaluation and a user-centric evaluation on the same data, collected from an authentic social learning platform. Our findings suggest that user-centric evaluation results are not necessarily in line with data-centric evaluation results. We conclude that the traditional evaluation of recommender systems in terms of prediction accuracy does not suffice to judge their performance on the user side. Moreover, the user-centric evaluation provides valuable insights into how candidate algorithms perform on each of five quality metrics: the usefulness, accuracy, novelty, diversity, and serendipity of the recommendations.