Our work aims to partially address the need to validate non-accuracy-focused algorithms in a fair way (i.e., fairly compared with accuracy-focused algorithms). In addition, our model, which is inspired by validation processes where implicit feedback is available (through direct contact with users), postulates that, if the aim is to evaluate whether a recommender system is accurate in suggesting novel items, the evaluation should not be performed (as is usually the case in the literature; see, for example, [1][2][3][14][15]) by providing popular items as test examples, but rather more novel items (thus showing the actual ability of the system to be accurate when recommending novel items), which is precisely the goal of our approach. Note, therefore, that our model does not claim to deal with biases inherent in existing datasets (such as, for example, the two datasets used in this work; see Section 4.1) or with biases inherent in existing algorithms (for such biases, see, for example, the recent work on fairness for recommender systems [23][24][25]), but rather with the popularity bias introduced by the evaluation processes traditionally applied in the field of recommender systems (i.e., random cross-validation or bootstrapping), which makes the joint comparison of accuracy and another dimension (such as, for example, novelty) under traditional processes unfair.
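To make the contrast concrete, the following is a minimal sketch (not the authors' actual protocol or code) of the difference between a traditional random hold-out and a novelty-aware hold-out in which test interactions involve long-tail (less popular) items. All names, the `tail_fraction` threshold, and the toy interaction log are illustrative assumptions.

```python
# Illustrative sketch only: contrasts a random hold-out with a novelty-aware
# hold-out that tests accuracy on long-tail (unpopular) items.
# Function names, thresholds, and the toy data are assumptions, not the paper's code.
import random
from collections import Counter

def random_split(interactions, test_ratio=0.2, seed=42):
    """Traditional protocol: random hold-out (one fold of random cross-validation)."""
    rng = random.Random(seed)
    shuffled = interactions[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

def novelty_aware_split(interactions, tail_fraction=0.8):
    """Hold out interactions whose items fall in the long tail of the popularity
    distribution, so the test set probes accuracy on novel (unpopular) items."""
    counts = Counter(item for _, item in interactions)
    ranked = [item for item, _ in counts.most_common()]   # most to least popular
    head_size = int(len(ranked) * (1 - tail_fraction))
    head_items = set(ranked[:head_size])                  # short head (popular items)
    train = [(u, i) for u, i in interactions if i in head_items]
    test = [(u, i) for u, i in interactions if i not in head_items]
    return train, test

if __name__ == "__main__":
    # Tiny synthetic log of (user, item) interactions, skewed towards item "A".
    interactions = [("u1", "A"), ("u2", "A"), ("u3", "A"), ("u4", "A"),
                    ("u1", "B"), ("u2", "B"), ("u3", "C"), ("u4", "D")]
    _, test_random = random_split(interactions)
    _, test_novel = novelty_aware_split(interactions)
    print("random test items:        ", sorted({i for _, i in test_random}))
    print("novelty-aware test items: ", sorted({i for _, i in test_novel}))
```

Under the random split, popular items such as "A" are likely to dominate the test set, so an algorithm that simply recommends popular items scores well; under the novelty-aware split, the test set contains only long-tail items, so accuracy is measured precisely on the system's ability to recommend novel items.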