Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain, where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalence class were strongly correlated, while metrics from different equivalence classes were uncorrelated.
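To make the correlation claim concrete, here is a minimal Python sketch (not the paper's code; the ratings, predictions, and algorithm names are hypothetical) that computes two common accuracy metrics, MAE and RMSE, across several predictors and measures how strongly the two metrics agree. In the study's terms, metrics whose scores correlate strongly in this way would fall into the same equivalence class.

```python
# Minimal sketch: two standard prediction-accuracy metrics and the
# correlation between them across algorithms. All data are hypothetical.
import math

def mae(actual, predicted):
    """Mean absolute error between actual and predicted ratings."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error between actual and predicted ratings."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def pearson(xs, ys):
    """Pearson correlation between two equal-length score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical held-out ratings and four algorithms' predictions for them.
actual = [4, 3, 5, 2, 4, 1]
predictions = {
    "algo_a": [3.8, 2.9, 4.6, 2.4, 3.9, 1.5],
    "algo_b": [4.2, 3.5, 4.9, 1.8, 4.3, 1.2],
    "algo_c": [3.0, 3.1, 4.0, 2.9, 3.2, 2.0],
    "algo_d": [4.0, 3.0, 5.0, 2.1, 4.1, 1.1],
}
maes = [mae(actual, p) for p in predictions.values()]
rmses = [rmse(actual, p) for p in predictions.values()]
# MAE and RMSE typically rank algorithms similarly, i.e. correlate strongly,
# which is the sense in which two metrics share an equivalence class.
print(pearson(maes, rmses))
```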
Online communities are increasingly important to organizations and the general public, but there is little theoretically based research on what makes some online communities more successful than others. In this article, we apply theory from the field of social psychology to understand how online communities develop member attachment, an important dimension of community success. We implemented and empirically tested two sets of community features for building member attachment by strengthening either group identity or interpersonal bonds. To increase identity-based attachment, we gave members information about group activities and intergroup competition, and tools for group-level communication. To increase bond-based attachment, we gave members information about the activities of individual members and interpersonal similarity, and tools for interpersonal communication. Results from a six-month field experiment show that participants' visit frequency and self-reported attachment increased in both conditions. Community features intended to foster identity-based attachment had stronger effects than features intended to foster bond-based attachment. Participants in the identity condition with access to group profiles and repeated exposure to their group's activities visited their community twice as frequently as participants in other conditions. The new features also had stronger effects on newcomers than on old-timers. This research illustrates how theory from the social science literature can be applied to gain a more systematic understanding of online communities and how theory-inspired features can improve their success.
Under-contribution is a problem for many online communities. Social psychology theories of social loafing and goal-setting can provide mid-level design principles to address this problem. We tested the design principles in two field experiments. In the first, members of an online movie recommender community were reminded of the uniqueness of their contributions and the benefits that follow from them. In the second, they were given a range of individual or group goals for contribution. As predicted by theory, individuals contributed more when they were reminded of their uniqueness and when they were given specific and challenging goals, but other predictions were not borne out. The paper ends with suggestions and challenges for mining social science theories, as well as implications for design.
Wikipedia's brilliance and curse is that any user can edit any of the encyclopedia entries. We introduce the notion of the impact of an edit, measured by the number of times the edited version is viewed. Using several datasets, including recent logs of all article views, we show that an overwhelming majority of the viewed words were written by frequent editors and that this majority is increasing. Similarly, using the same impact measure, we show that the probability of a typical article view being damaged is small but increasing, and we present empirically grounded classes of damage. Finally, we make policy recommendations for Wikipedia and other wikis in light of these findings.
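The impact measure lends itself to a compact illustration. The following sketch (hypothetical data structures and names, not the authors' implementation) credits each word of a viewed revision to the editor who wrote it, so an editor's impact is the number of times their words are viewed.

```python
# Minimal sketch of the "impact" idea described above: credit each word of a
# viewed revision to the editor who wrote it, so an edit's impact is the
# number of times its words are viewed. All data here are hypothetical.
from collections import Counter

def word_views(revisions, view_log):
    """
    revisions: {revision_id: list of (word, author) pairs for that version}
    view_log:  list of revision_ids, one entry per article view
    Returns a Counter of word-views credited to each author.
    """
    credit = Counter()
    for rev_id in view_log:
        for _word, author in revisions[rev_id]:
            credit[author] += 1
    return credit

revisions = {
    1: [("hello", "alice"), ("world", "alice")],
    2: [("hello", "alice"), ("brave", "bob"), ("world", "alice")],
}
views = [1, 1, 2]  # revision 1 viewed twice, revision 2 viewed once
print(word_views(revisions, views))
# Counter({'alice': 6, 'bob': 1}) -> most viewed words were written by 'alice'
```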
For many topics, the World Wide Web contains hundreds or thousands of relevant documents of widely varying quality. Users face a daunting challenge in identifying a small subset of documents worthy of their attention. Link analysis algorithms have received much interest recently, in large part for their potential to identify high-quality items. We report here on an experimental evaluation of this potential. We evaluated a number of link- and content-based algorithms using a dataset of web documents rated for quality by human topic experts. Link-based metrics did a good job of picking out high-quality items: precision at 5 is about 0.75 and precision at 10 is about 0.55, in a dataset where 0.32 of all documents were of high quality. Surprisingly, a simple content-based metric performed nearly as well: ranking documents by the total number of pages on their containing site.
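The precision figures quoted above are straightforward to compute. A minimal sketch, with hypothetical document IDs and expert judgments:

```python
# Precision@k = fraction of the top-k ranked documents that human experts
# rated high quality. Document IDs and the "good" set below are hypothetical.
def precision_at_k(ranked_docs, high_quality, k):
    """Fraction of the top-k ranked documents judged high quality."""
    top_k = ranked_docs[:k]
    return sum(1 for d in top_k if d in high_quality) / k

ranked = ["d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10"]
good = {"d1", "d2", "d4", "d5", "d7"}    # expert-rated high-quality docs
print(precision_at_k(ranked, good, 5))   # 0.8
print(precision_at_k(ranked, good, 10))  # 0.5
```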