2010
DOI: 10.1038/465870a

How to improve the use of metrics

Abstract: Since the invention of the Science Citation Index in the 1960s, quantitative measurement of researchers' performance has become ever more prevalent, controversial and influential. Six commentators tell Nature what changes might ensure that individuals are assessed more fairly.

Cited by 29 publications (7 citation statements)
References 8 publications
“…On the other side, there are also a lot of outputs that are crucial and this study does not take into account [ 20 ]. Moreover, it has been documented that publications, citations and patents are poor proxies of the outputs of S&T since to capture the essence of good science, evaluators of scientific activity should combine forces to create an open, sound and consistent system for measuring all the activities that make up academic productivity [ 13 , 20 – 25 ]. In this paper, we only consider publications and citations reported in ISI databases, and even though they are a valuable tool in policy studies addressing general issues regarding academic systems since they are objective measurements of the diffusion and impact of research, and allow us to determine the geographic origin of research and detect growth or erosion of countries´ scientific impact; they have important limitations, such as they do not take into account books, proceedings, local journals, etc., besides the count of citations does not consider misspellings [ 26 ].…”
Section: Discussion (mentioning)
confidence: 99%
“…Quantitative measures can and should still be used, but without understanding the context of individual researchers and their fields, no metric can solely provide evidence of impact, especially across disciplines and geographic scopes. Our findings suggest that university-wide assessments are overly burdensome, and experts agree that metrics should be used to complement or support qualitative assessment, with a greater concentration on professional judgment from other colleagues in the field or the department (Bergstrom, 2010;Muller, 2018).…”
Section: Discussion (mentioning)
confidence: 82%
“…Unfortunately, any metric that measures human performance is subject to manipulation and corruption (Muller, 2018). Opponents of metric culture typically do not condone metrics themselves or even the use of metrics, but rather the over-reliance on metrics; they argue that metric use in the absence of expert and qualitative assessment is irresponsible, inadequate, and even dangerous (Bergstrom, 2010;Edwards & Roy, 2016;Moustafa, 2016;Muller, 2018).…”
Section: Interpretations and Limitations of Citation Impact Indicators (mentioning)
confidence: 99%
“…As Campbell (1979) warned, “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor” (p. 85). Impact factor measurement not only restricts assessment of the quality of academic contributions to limited metrics, but it also “lures scientists to pursue high ranking first and good science second” (Bergstrom, 2010, p. 870). Also, a strong incentive is established for scientists, who under career-critical pressure to perform, to game the metrics that are meant to measure the quality of their work.…”
Section: Meta-scientific Issues in Judging Scholarly Work (mentioning)
confidence: 99%