F1000 recommendations have been assessed as a potential data source for research evaluation, but the reasons for differences between F1000 Article Factor (FFa) scores and citations remain unexplored. By linking recommendations for 28,254 publications in F1000 with citations in Scopus, we investigated the effect of research level (basic, clinical, mixed) and article type on the internal consistency of assessments based on citations and FFa scores. Research level has little impact on the differences between the 2 evaluation tools, whereas article type has a substantial effect. The 2 measures differ significantly for 2 groups: (a) nonprimary research and evidence-based research are more highly cited but not highly recommended, whereas (b) translational research and transformative research are more highly recommended but less frequently cited. This is to be expected, since citation activity is practiced mainly by academic authors, whereas an article's potential to spark scientific revolutions and its suitability for clinical practice are best judged from a practitioner's perspective. We conclude with a recommendation that the application of bibliometric approaches in research evaluation should take into account the proportions of 3 types of publications: evidence-based research, transformative research, and translational research. The latter 2 types are more suitable for assessment through peer review.
Introduction

Many stakeholders are concerned with how to properly assess the true impact of biomedical research. Investigators and research institutions often assess impact through such simple measures as publications in peer-reviewed journals, the impact factors of those journals, success in acquiring research grants, and the awarding of patents for novel inventions (Dembe, Lynch, Gugiu, & Jackson, 2014). The San Francisco Declaration on Research Assessment (DORA), initiated by the American Society for Cell Biology (ASCB) together with a group of editors and publishers of scholarly journals, recognizes the need to improve the methods applied to evaluate the outputs of scientific research. Its general recommendation is: "Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions" (Way & Ahmad, 2013, p. 1903).

Citation analysis, as one of the key methodologies in bibliometrics, has become an important tool for research performance assessment in the biomedical sciences (Du & Tang, 2013; Patel et al., 2011; Walker, Sykes, Hemmelgarn, & Quan, 2010). However, before bibliometric evaluation became widely accepted, peer review was the main tool used for research evaluation. This traditional approach received a new breath of life when the Faculty of 1000 Biology (F1000 Biology) was launched in 2002 to evaluate the quality of biomedical literature through a post-publication peer review system. F1000 Biology was later joined by F1000 Medicine, and the two services were combined in 2009 to form F1000Prime, which has built a peer-nominated globa...