2019
DOI: 10.31219/osf.io/gyeh8
Preprint

Honest signaling in academic publishing

Abstract: Academic journals provide a key quality-control mechanism in science. Yet, information asymmetries and conflicts of interest incentivize scientists to deceive journals about the quality of their research. How can honesty be ensured, despite incentives for deception? Here, we address this question by applying the theory of honest signaling to the publication process. Our models demonstrate that several mechanisms can ensure honest journal submission, including differential benefits, differential costs, and cos…
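For context, the "differential benefits" and "differential costs" mechanisms named in the abstract follow the standard logic of costly-signaling theory. The sketch below illustrates that textbook logic only; it is not the paper's specific model, and all numbers are illustrative assumptions.

```python
# Textbook honest-signaling logic (illustrative sketch, not the paper's model):
# a costly signal of quality (e.g. submitting to a selective journal) stays
# honest when it yields a net gain only for genuinely high-quality work.

def signaling_pays(benefit, cost):
    """Does sending the high-quality signal yield a net gain?"""
    return benefit - cost > 0

# Differential costs: the signal is cheap for high-quality work but expensive
# for low-quality work (scrutiny, revisions, risk of desk rejection).
print(signaling_pays(benefit=1.0, cost=0.4))   # high-quality work: True
print(signaling_pays(benefit=1.0, cost=1.5))   # low-quality work: False

# Differential benefits: costs are equal, but acceptance is worth more to
# authors of high-quality work than to authors of low-quality work.
print(signaling_pays(benefit=1.2, cost=0.5))   # high-quality work: True
print(signaling_pays(benefit=0.3, cost=0.5))   # low-quality work: False
```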

Cited by 3 publications (7 citation statements)
References 86 publications (126 reference statements)
“…When negative and positive results are equally valuable, scientists have no incentive to conduct large studies in order to increase the probability of positive results: a large study costs more time but generates the same payoff as an underpowered study that can quickly produce a negative result. How does this finding fit into ongoing discussions about whether researchers should publish all of their results (refs. 2, 53, 54), or whether some types of publication bias (refs. 5, 39, 55, 56) or publication restrictions (refs. 10, 57–59) are desirable? Our model points out that, when study quality is not sufficiently rewarded, a bias against negative results incentivizes scientists to conduct larger studies.…”
Section: Discussion (mentioning)
confidence: 93%
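To make the payoff logic in the excerpt above concrete, here is a minimal illustrative sketch (not the cited model; the parameter values and the assumed prevalence of true effects are made up) comparing the expected payoff per unit time of a quick, underpowered study with that of a slow, high-powered one, with and without rewards for negative results.

```python
# Illustrative sketch, not the cited model: expected payoff per unit time of a
# quick, underpowered study versus a slow, high-powered study, with and without
# rewards for negative results. All parameter values are made-up assumptions.

def payoff_rate(power, time_cost, value_positive, value_negative,
                p_true_effect=0.5, alpha=0.05):
    """Expected payoff per unit time for one study attempt.

    power          -- chance of a positive result when a true effect exists
    time_cost      -- time needed to run the study
    value_positive -- reward for a positive result
    value_negative -- reward for a negative result
    p_true_effect  -- assumed prior probability that the effect is real
    alpha          -- false-positive rate when the effect is absent
    """
    p_positive = p_true_effect * power + (1 - p_true_effect) * alpha
    expected_value = p_positive * value_positive + (1 - p_positive) * value_negative
    return expected_value / time_cost

small = dict(power=0.2, time_cost=1.0)   # quick, underpowered study
large = dict(power=0.9, time_cost=3.0)   # slow, high-powered study

# Negative results rewarded equally: the quick study wins on payoff rate.
print(payoff_rate(**small, value_positive=1.0, value_negative=1.0))  # 1.00
print(payoff_rate(**large, value_positive=1.0, value_negative=1.0))  # ~0.33

# Only positive results rewarded: the high-powered study now pays off faster.
print(payoff_rate(**small, value_positive=1.0, value_negative=0.0))  # ~0.13
print(payoff_rate(**large, value_positive=1.0, value_negative=0.0))  # ~0.16
```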
“…Note that the mechanism by which start-up costs incentivize higher-quality research does not necessarily depend on the timing of such costs. Any costs disproportionately paid by scientists who attempt to conduct quick, low-quality research will serve the same function (ref. 10). These might include wrap-up costs, such as long peer review times, or costs at other points in the research pipeline (for example, an obligation to peer review n other papers for each submitted paper).…”
Section: Discussion (mentioning)
confidence: 99%
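The role of per-attempt costs described in the excerpt above can be illustrated the same way (again a made-up sketch, not the cited model): a fixed per-submission overhead is paid once per attempt, so it weighs most heavily on strategies that need many quick, low-powered attempts per positive result.

```python
# Illustrative sketch, not the cited model: when only positive results are
# rewarded, a fixed per-submission overhead (start-up costs, slow peer review,
# review obligations) penalizes quick, low-powered studies the most, because
# they need more attempts per positive result. Parameter values are made up.

def payoff_rate_with_overhead(power, study_time, overhead,
                              p_true_effect=0.5, alpha=0.05):
    """Expected positive results per unit time, with a fixed cost per attempt."""
    p_positive = p_true_effect * power + (1 - p_true_effect) * alpha
    return p_positive / (study_time + overhead)

for overhead in (0.0, 2.0):
    quick = payoff_rate_with_overhead(power=0.4, study_time=1.0, overhead=overhead)
    careful = payoff_rate_with_overhead(power=0.9, study_time=3.0, overhead=overhead)
    print(f"overhead={overhead}: quick={quick:.3f}, careful={careful:.3f}")
# overhead=0.0: the quick strategy wins (~0.225 vs ~0.158)
# overhead=2.0: the careful strategy wins (~0.075 vs ~0.095)
```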
“…The lack of specificity and resulting lack of alignment between theory and statistical model may be an instance of deceptive signaling (in the ecological sense, which doesn't imply intent to deceive), where a lack of theoretical rigor is covered up with statistical tests and a recitation of related observed correlations. This maps the generalizability crisis onto analogous problems for which models already exist as starting points, including models of signaling in collaborative environments (Smaldino, Flamson, & McElreath, 2018; Smaldino & Turner, 2020; Tiokhin et al., 2021), the evolution of scientific knowledge on networks (Zollman, 2007; 2010; 2013; O'Connor & Weatherall, 2018), and the effect of prevailing social power on individual choices (Henrich & Boyd, 2008; Higginson & Munafò, 2016; Bergstrom, Foster, & Song, 2016; O'Connor, 2019). With some further development, these models could be used to conduct several "what if things are different" computational experiments under a variety of assumptions to understand what might happen if various interpersonal or institutional changes were instituted.…”
(mentioning)
confidence: 99%