Ranking of research institutions by bibliometric methods, as currently practiced, is an improper tool for research performance evaluation, even at the level of large institutions. The problem, however, is not ranking as such: the indicators used for ranking are often not advanced enough, and this is part of the broader problem of insufficiently developed bibliometric indicators being applied by persons who lack competence and experience in the field of quantitative studies of science. After a brief overview of the basic elements of bibliometric analysis, I discuss the major technical and methodological problems in the application of publication and citation data in the context of evaluation. I then contend that the core of the problem does not necessarily lie on the side of the data producer. Quite often the persons responsible for research performance evaluation, for instance scientists themselves in their roles as heads of institutions and departments, science administrators at the government level, and other policy makers, show an attitude that encourages 'quick and dirty' bibliometric analyses even though better-quality analyses are available. Finally, the necessary conditions for a successful application of advanced bibliometric indicators as a support tool for peer review are discussed.
The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. The comparison focuses on the methodological choices underlying the different rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking and a number of limitations of the ranking are pointed out.
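The contrast between full and fractional counting of collaborative publications mentioned in point (2) can be illustrated with a short sketch. The publication records and university names below are invented for illustration, and credit is split equally among the distinct universities on a paper, which is a simplification of the address-based fractionalization used in the ranking itself.

```python
# Illustration of full vs. fractional counting of collaborative publications.
# The publication records and university names are hypothetical.

publications = [
    ["Univ A"],                      # single-institution paper
    ["Univ A", "Univ B"],            # two-way collaboration
    ["Univ A", "Univ B", "Univ C"],  # three-way collaboration
]

full_counts = {}
fractional_counts = {}

for affiliations in publications:
    universities = set(affiliations)
    for uni in universities:
        # Full counting: every participating university receives credit 1,
        # so a collaborative paper is counted once per participant.
        full_counts[uni] = full_counts.get(uni, 0) + 1
        # Fractional counting: the single credit is split equally, so the
        # credits over all universities sum to the number of publications.
        fractional_counts[uni] = fractional_counts.get(uni, 0) + 1 / len(universities)

print(dict(sorted(full_counts.items())))        # {'Univ A': 3, 'Univ B': 2, 'Univ C': 1}
print(dict(sorted(fractional_counts.items())))  # {'Univ A': ~1.83, 'Univ B': ~0.83, 'Univ C': ~0.33}
```

Under fractional counting the total credit distributed equals the number of publications, which is why it is generally preferred when collaborative output is compared across universities.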
The crown indicator is a well-known bibliometric indicator of research performance developed by our institute. The indicator aims to normalize citation counts for differences among fields. We critically examine the theoretical basis of the normalization mechanism applied in the crown indicator. We also make a comparison with an alternative normalization mechanism. The alternative mechanism turns out to have more satisfactory properties than the mechanism applied in the crown indicator. In particular, the alternative mechanism has a so-called consistency property. The mechanism applied in the crown indicator lacks this important property. As a consequence of our findings, we are currently moving towards a new crown indicator, which relies on the alternative normalization mechanism.
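The difference between the two normalization mechanisms can be made explicit. Following the published descriptions of the old and new crown indicators (the abstract itself does not spell out the formulas), let c_i denote the citation count of publication i of a research unit, e_i the expected number of citations given the field and year of that publication, and n the number of publications of the unit.

```latex
% Old crown indicator: normalize after aggregating (a ratio of averages).
\[
  \text{CPP/FCSm} \;=\; \frac{\sum_{i=1}^{n} c_i}{\sum_{i=1}^{n} e_i}
\]

% Alternative mechanism (new crown indicator): normalize each publication
% first and then average (an average of ratios).
\[
  \text{MNCS} \;=\; \frac{1}{n} \sum_{i=1}^{n} \frac{c_i}{e_i}
\]
```

Roughly, the consistency property mentioned above means that adding an identical publication to the oeuvres of two units cannot reverse their relative ranking; the average-of-ratios mechanism satisfies this, whereas the ratio-of-averages mechanism does not.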