2021
DOI: 10.1038/s41562-020-01040-1

Competition for priority harms the reliability of science, but reforms can help

Abstract: Academic science is a culturally evolved social institution with formal rules, norms and conventions. However, in recent years, scientists have begun to examine the utility of even longstanding characteristics of this institution 1-3. For example, it is now widely recognized that preferentially valuing positive over negative results can generate publication bias, which distorts the published literature 4,5; evaluating scientists based on their number of publications can cause a myopic focus on productivity at…

Cited by 25 publications (17 citation statements)
References 79 publications (103 reference statements)
“…And how can we pragmatically replace the current, flawed metrics of research productivity with ones that more validly reflect reliable, incremental knowledge gain? The primary goal of such an assessment scheme would be to ensure that actual research quality is sustained (or even promoted) when evaluation metrics are being maximized: both actively, when researchers strategically decide how to behave in order to further their own careers (sometimes to the extent of gaming the system), and passively, when institutions select and reward individuals who scored highest in the rankings based on these parameters (Bakker et al., 2012; Franco, Malhotra, & Simonovits, 2014; Müller & de Rijcke, 2017; Smaldino & McElreath, 2016; Tiokhin, Yan, & Morgan, 2021). This whitepaper reports the results of this task force.…”
mentioning
confidence: 99%
“…We need to encourage a more realistic view of what constitutes valuable research. Some journals have introduced policies that protect authors against “scooping”, by offering a period of protection where manuscripts will be considered even if similar findings have recently been published elsewhere [5, 6]. This reduces incentives to be the first to publish, which can result in a hyper-competitive atmosphere that reduces quality.…”
Section: Publishers
mentioning
confidence: 99%
“…Reforms such as registered reports or pre-registration may result in improved research quality. (Tiokhin et al., 2021)…”
Section: Multi-person Conversation
mentioning
confidence: 99%