Design/methodology/approach - Average Mendeley reader counts were compared with average Scopus citation counts for 104,520 articles from ten disciplines during the second half of 2016.
Findings - Articles attracted, on average, between 0.1 and 0.8 Mendeley readers per article in the month in which they first appeared in Scopus. This is about ten times the average Scopus citation count at that point.
Research limitations/implications - Other subjects may use Mendeley more or less than the ten investigated here. The results depend on Scopus's indexing practices, and Mendeley reader counts can be manipulated and have national and seniority biases.
Practical implications - Mendeley reader counts during the month of publication are more powerful than Scopus citations for comparing the average impacts of groups of documents, but are not high enough to differentiate between the impacts of typical individual articles.
Originality/value - This is the first multi-disciplinary and systematic analysis of Mendeley reader counts from the publication month of an article.
Introduction

Academic research is evaluated for appointment, promotion and tenure decisions, for university league tables, for national research evaluation exercises, and for self-reflection. Some of these evaluations use quantitative data or are supported by numerical evidence of impact. Citation counts for refereed journal articles are a common source of this quantitative data, including in the form of Journal Impact Factors (JIFs) and field normalised citation counts (Garfield, 2006; Waltman, van Eck, van Leeuwen, Visser, & van Raan, 2011; Wilsdon et al., 2015). Citation counts are not suitable for helping to evaluate new research because, due to publication delays, articles may take three years to attract a substantial number of citations. For this reason, formal evaluations often use a citation window of considerable length, such as three years (Wang, 2013), which excludes newer articles. This means that the most recent and, therefore, most relevant research cannot be evaluated with the help of most citation-based indicators because they cannot differentiate effectively between different levels of impact for individual articles.

Two solutions to this problem are to use the publishing journal's JIF (or journal rankings: Kulczycki, 2017) as a proxy for citation impact, or to use web-based early impact indicators. JIFs can avoid citing-article publication delays if it is accepted that the average impact of a journal is an appropriate proxy for the impact of its articles (but see: Lozano, Larivière, & Gingras, 2012; and note also the time dimension: Larivière, Archambault, & Gingras, 2008).
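To make the JIF proxy concrete, the standard two-year impact factor (Garfield, 2006) can be sketched as follows; the notation below is illustrative rather than reproduced from any of the sources cited:

\[
\mathrm{JIF}_{y} \;=\; \frac{C_{y}\!\left(P_{y-1} \cup P_{y-2}\right)}{\left|P_{y-1}\right| + \left|P_{y-2}\right|}
\]

where \(P_{y-1}\) and \(P_{y-2}\) are the citable items that the journal published in the two preceding years and \(C_{y}(\cdot)\) counts the citations received in year \(y\) by those items. Because a journal's latest JIF is already known when a new article appears in it, this proxy is available from the month of publication, at the price of assuming that the new article will match its journal's average impact.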