Proceedings of the 8th ACM Conference on Recommender Systems 2014
DOI: 10.1145/2645710.2645737
User perception of differences in recommender algorithms

Cited by 134 publications (72 citation statements)
References 21 publications
“…Seen in this way, minimal necessary data is a continuation of an existing evolution. Ensuring User-centered Recommendation: Recently, research studies have demonstrated that algorithm accuracy does not necessarily play a dominant role in how users receive a recommender system [7,10]. If the performance improvements achieved by using more data to train a recommender system are too slight or subtle for users to notice, the additional data adds no value and should not be used.…”
Section: Background and Motivation (mentioning, confidence: 99%)
“…We compute novelty as the mean popularity rank of the items recommended to the user. Finally, coverage is computed as the percentage of <user, movie> pairs for which we can predict a rating [2,15,17]. All metrics range from 0% to 100%, where 100% is the best.…”
Section: Study A: Offline Experiments (mentioning, confidence: 99%)
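The excerpt above describes two standard offline metrics. A minimal sketch of how they might be computed, assuming a rating predictor predict(user, movie) that returns None when no prediction is possible and a precomputed item-popularity ranking; the function and variable names here are hypothetical, not taken from the cited paper.

import numpy as np

def novelty(recommended, popularity_rank):
    """Mean popularity rank of recommended items, scaled to 0-100%.

    popularity_rank maps item -> rank in [1, n_items] (1 = most popular),
    so higher values indicate less popular, i.e. more novel, recommendations.
    """
    n_items = len(popularity_rank)
    return 100.0 * np.mean([popularity_rank[i] for i in recommended]) / n_items

def coverage(predict, users, movies):
    """Percentage of <user, movie> pairs for which a rating is predictable."""
    predictable = sum(predict(u, m) is not None for u in users for m in movies)
    return 100.0 * predictable / (len(users) * len(movies))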
“…Perceived quality is the degree to which users judge recommendations positively and appreciate the overall experience with the recommender system. To better scope our research, we operationalize this notion in terms of four metrics: perceived accuracy, novelty, diversity, and overall user satisfaction, as defined in [15,22,27]: Perceived accuracy (also called relevance) measures how much the recommendations match the users' interests, preferences, and tastes; Diversity measures how much users perceive recommendations as different from each other, e.g. movies from different genres; Novelty measures the extent to which users receive new recommended movies; Overall user satisfaction measures the users' overall feeling about the experience with the recommender system.…”
Section: Study B: Online Evaluation (mentioning, confidence: 99%)
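The four constructs above are typically measured with questionnaire items. For concreteness, a minimal sketch of one common aggregation, scoring each construct as the mean of its 1-to-5 Likert items across participants; the scale, the grouping of items, and every name below are assumptions for illustration, not details of the cited study.

from statistics import mean

# Hypothetical per-participant Likert responses (1-5), one inner list
# of item scores per participant, grouped by construct.
responses = {
    "perceived_accuracy": [[4, 5, 4], [3, 4, 4]],
    "diversity":          [[3, 3, 4], [4, 4, 3]],
    "novelty":            [[2, 3, 3], [3, 3, 2]],
    "satisfaction":       [[5, 4, 5], [4, 4, 4]],
}

# Score each construct as the mean of per-participant item means.
scores = {
    construct: mean(mean(items) for items in per_participant)
    for construct, per_participant in responses.items()
}
print(scores)  # e.g. {'perceived_accuracy': 4.0, ...}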
“…On-line volunteers are participants who responded voluntarily to a call for participation, via e-mail or postal mail. The number of participants often depends on the credibility of the experiment conductor: if the conductor can obtain the cooperation of a real service provider, a relatively large number of users tend to become involved [21], [96]. Recently, a decent number of users can be gathered through crowdsourcing platforms such as Amazon Mechanical Turk (https://www.mturk.com).…”
Section: Small-sized User Studies (mentioning, confidence: 99%)