In the last few years, many new bibliometric rankings or indices have been proposed for comparing the output of scientific researchers. We propose a formal framework in which rankings can be axiomatically characterized. We then present a characterization of some popular rankings. We argue that such analyses can help the users of a ranking choose one that is adequate for the context in which they work.
In the literature on MCDM, many methods have been proposed in order to sort alternatives evaluated on several attributes into ordered categories. Most of them were proposed on an ad hoc basis. The purpose of this paper is to contribute to a recent trend of research aiming at giving these methods sound theoretical foundations. Using tools from conjoint measurement, we provide an axiomatic analysis of the partitions of alternatives into two categories that can be obtained using what we call "noncompensatory sorting models". These models have strong links with the pessimistic version of ELECTRE TRI. Our analysis allows us to pinpoint what appear to be the main distinctive features of ELECTRE TRI when compared to other sorting methods. It also sheds light on the various methods that have been proposed to assess the parameters of ELECTRE TRI on the basis of assignment examples.
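To make the idea of noncompensation concrete, here is a minimal Python sketch of a two-category sorting rule in the spirit of the pessimistic ELECTRE TRI with a single limiting profile and no veto thresholds (the simplification often called MR-Sort). The profile, weights, and threshold values are illustrative assumptions, not data from the paper.

```python
# A two-category noncompensatory sorting rule: an alternative reaches the
# upper category iff the criteria on which it matches or beats the limiting
# profile carry enough total weight. Strength on other criteria does not help.

def assign(alternative, profile, weights, threshold):
    """Return the category of `alternative` relative to `profile`."""
    support = sum(w for a, b, w in zip(alternative, profile, weights) if a >= b)
    return "upper" if support >= threshold else "lower"

profile = [10, 10, 10]        # limiting profile between the two categories
weights = [0.4, 0.35, 0.25]   # criterion weights, summing to 1
lam = 0.6                     # majority (cutting) threshold

print(assign([12, 11, 2], profile, weights, lam))   # upper: support 0.75 >= 0.6
print(assign([9, 9, 100], profile, weights, lam))   # lower: an excellent third
                                                    # score cannot compensate
```

The second call illustrates the noncompensatory feature: however high the evaluation on the third criterion, it cannot make up for failing the profile on the first two.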
The standard data that we use when computing bibliometric rankings of scientists are just their publication/citation records, i.e., so many papers with zero citations, so many with one citation, so many with two citations, and so on. The standard data for bibliometric rankings of departments have the same structure. It is therefore tempting (and many authors have given in to the temptation) to use the same method for computing rankings of scientists and rankings of departments. Depending on the method, this can yield quite surprising and unpleasant results. Indeed, with some methods, it may happen that the "best" department contains only the "worst" scientists. This problem cannot occur if the rankings satisfy a property called consistency, recently introduced in the literature. In this paper, we explore the consequences of consistency and we characterize two families of consistent rankings.
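A minimal sketch of such a consistency failure, using made-up citation records and the h-index as the method (the h-index is known to be inconsistent in this sense): each scientist in department A outranks each scientist in department B, yet pooling the records reverses the ranking of the departments.

```python
# Consistency failure of the h-index on hypothetical records: merging the
# records of the individually weaker scientists yields the stronger department.

def h_index(citations):
    """Largest k such that at least k papers have at least k citations."""
    cites = sorted(citations, reverse=True)
    return max((k for k in range(1, len(cites) + 1) if cites[k - 1] >= k),
               default=0)

a = [4, 4, 4, 4]   # a scientist in department A: h = 4
b = [5, 5, 5]      # a scientist in department B: h = 3

print(h_index(a), h_index(b))          # 4 3 -> each A-scientist beats each B-scientist
print(h_index(a + a), h_index(b + b))  # 4 5 -> yet department B beats department A
```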
Scoring rules (or score-based rankings, or summation-based rankings) form a family of bibliometric rankings in which authors are ranked according to the sum, over all their publications, of partial scores. Many of these rankings are widely used (e.g., the number of publications, weighted or not by the impact factor, by the number of authors, or by the number of citations). We present an axiomatic analysis of the family of all scoring rules and of some particular cases within this family.
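A minimal sketch of the common structure of scoring rules, assuming each publication is represented as a (citations, number of authors) pair; the three partial-score functions and the records are illustrative examples, not rules singled out by the paper.

```python
# A scoring rule: an author's score is the sum, over all their publications,
# of a partial score attached to each publication; authors are ranked by score.

def scoring_rule(records, partial_score):
    """Rank authors (best first) by the sum of partial scores of their papers."""
    scores = {author: sum(partial_score(p) for p in papers)
              for author, papers in records.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Each publication is (citations, number_of_authors); the numbers are made up.
records = {"ann": [(10, 1), (0, 2)], "bob": [(3, 1), (3, 1), (3, 3)]}

print(scoring_rule(records, lambda p: 1))            # publication count: bob first
print(scoring_rule(records, lambda p: p[0]))         # total citations: ann first
print(scoring_rule(records, lambda p: p[0] / p[1]))  # author-fractionalized citations
```

Changing only the partial-score function switches between familiar rankings while keeping the summation structure fixed, which is exactly the family the axiomatic analysis addresses.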