2020
DOI: 10.1093/reseval/rvaa032

The role of metrics in peer assessments

Abstract: Metrics on scientific publications and their citations are easily accessible and are often referred to in assessments of research and researchers. This paper addresses whether metrics are considered a legitimate and integral part of such assessments. Based on an extensive questionnaire survey in three countries, the opinions of researchers are analysed. We provide comparisons across academic fields (cardiology, economics, and physics) and contexts for assessing research (identifying the best research in their …

Cited by 31 publications (14 citation statements). References 46 publications.
“…These field differences are generally consistent with the literature, which finds that evaluation criteria differ between disciplines (e.g. Guetzkow, Lamont and Mallard, 2004; Hamann and Beljean, 2017; Hammarfelt, 2020; Hug et al., 2013; Langfeldt, Reymert and Aksnes, 2020; Reymert, Jungblut and Borlaug, 2020). The field differences between the consensus-close classes would offer a convenient explanation for the difference between the core and broad consensus, as the rating patterns of the consensus-close classes are almost identical to those of the consensus classes.…”
Section: Discussion (supporting)
confidence: 86%
“…Specifically, this means that criteria related to knowledge gaps, feasibility, rigour, comprehensibility and argumentation, academic relevance, as well as to the competence and experience of the applicant should have a stronger influence than other criteria, such as originality and other contribution-related criteria. This is consistent with the notion of conservatism, which holds that innovative and ground-breaking research is undervalued in peer review (Lee et al., 2013), while other aspects, such as methodology (Lee, 2015) and feasibility (Langfeldt et al., 2020), are overweighted.…”
Section: Discussion (supporting)
confidence: 81%
“…These metrics form the basis of criteria that have been widely used to measure institutional reputation, as well as that of authors and research groups. By relying mostly on citations (Langfeldt et al., 2021), however, they are inherently flawed in that they provide only a limited picture of scholarly production. Indeed, citations only count document use within scholarly works and thus provide a very limited view of the use and impact of an article.…”
Section: Introduction (mentioning)
confidence: 99%
“…Submitted to SI on "Quality and Quantity in Research Assessment: Examining the Merits of Metrics" in Frontiers in Research Metrics and Analytics. Those metrics and analyses are based around citations (Langfeldt et al., 2021), but citations only count document use within scholarly works and thus provide a very limited view of the use and impact of an article. These reveal only the superficial dimensions of research impact on society.…”
Section: Introduction (mentioning)
confidence: 99%