Proceedings of the Fifth ACM Conference on Recommender Systems 2011
DOI: 10.1145/2043932.2043993

A pragmatic procedure to support the user-centric evaluation of recommender systems

Cited by 55 publications (33 citation statements)
References 17 publications
“…In fact, users provided rather high average rating scores for all the algorithms (KNN: 4.05; graph-based: 4.1; MF: 3.95; all out of 5). Since our sample size was sufficiently large at 20 users per algorithm [5], we take this to show that the users were satisfied with the accuracy of the recommendations, regardless of the type of algorithm that generated them. Nevertheless, the user-centric evaluation results (Figure 5) show that the graph-based recommender received somewhat higher average rating scores from users for the perceived usefulness, novelty, and serendipity of its recommendations than the other two algorithms.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
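
The excerpt above reports per-algorithm mean ratings collected from 20 users each. A minimal Python sketch of that kind of aggregation follows; it is not taken from the cited paper, and the rating values below are illustrative placeholders, not the study's raw responses.

from statistics import mean

# Hypothetical 1-5 Likert ratings from 20 users per algorithm.
# The cited study does not publish its raw responses, so these
# values are made up purely to illustrate the aggregation step.
ratings = {
    "KNN":         [4, 5, 4, 3, 5, 4, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4, 4],
    "graph-based": [5, 4, 4, 4, 5, 3, 4, 5, 4, 4, 4, 5, 4, 3, 4, 5, 4, 4, 5, 4],
    "MF":          [4, 4, 3, 5, 4, 4, 3, 4, 5, 4, 3, 4, 4, 4, 5, 3, 4, 4, 4, 4],
}

# Report the per-algorithm mean score and sample size.
for algo, scores in ratings.items():
    print(f"{algo}: mean {mean(scores):.2f} out of 5 (n={len(scores)})")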
“…The questionnaire was designed to reflect how actual users perceive and appreciate the recommendations they receive, taking into account important aspects of user perception when running user studies of recommender systems [5], [28]. We asked the participants to answer six short questions by expressing their level of agreement with each one.…”
Section: Questionnaire
Citation type: mentioning (confidence: 99%)
“…Various authors showed that offline and online evaluations often provide contradictory results [93,269,280]. Many papers about various aspects of recommender-system evaluation have been published [226,268,275,280,283,298-300]. However, while many of the findings in these papers are important with respect to reproducibility, none of the authors mentioned or discussed their findings in the context of reproducibility.…”
Section: Reproducibility and the Butterfly Effect
Citation type: mentioning (confidence: 99%)