In 2011, Ploegh [1] wrote an outstanding comment in Nature entitled "End the Wasteful Tyranny of Reviewer Experiments." Ploegh eloquently explained how time and money are wasted in responding to peer review that goes beyond the task of deciding whether the premise of the work is worthy of publication. Unfortunately, little has changed since then! Peer review was intended to ensure that published studies were based on a sound hypothesis, valid methodology, and logical interpretation of results. However, peer review has morphed into what is often a "wish list" of experiments that the authors must successfully complete to achieve their goal of publication. Once a manuscript is revised, it is re-reviewed by the original reviewers: this is appropriate in principle, as the reviewers are already familiar with the work and are in the best position to determine whether the manuscript has been improved. The reviewers must be satisfied with the authors' responses. If not, (1) more work is requested, or (2) the paper is rejected. Thus, authors feel pressure to appease the reviewers. And herein lies a major problem.

Although reviewer queries are an integral aspect of the peer-review process, not all reviewer queries improve manuscripts. As Ploegh stated, the review/revision process is frequently not cost- or time-efficient. A critical question remains: are manuscripts of significantly greater value after review, revision, and possibly another round of the same?
In an informal survey of colleagues in the cancer field, most believe that extensive revisions (estimated to take 6-24 months) improve the value of manuscripts by only 0-20% (one author stated that extensive revisions for high-impact journals actually decrease a manuscript's value by 10%, as the delay in publication impedes the timely dissemination of knowledge).

In terms of cost-efficiency, reviewer comments seem to have become more complex and difficult to respond to, and the cost (for supplies and personnel) and time can drain laboratory resources. For example, we had a manuscript published in a high-impact journal after we persevered through 19 months of revisions; by our estimate, we spent over $220,000 (salary and experiments) to satisfy the reviewers' requests. The manuscript probably improved by 10%, and none of the principal findings changed.

On many occasions, the reviewers' wish list coerces authors to present a perfect story; if one is to get over the "hump" and have the manuscript accepted, one must provide positive results when addressing reviewer queries. Elizabeth Iorns, Co-Director of the Reproducibility Initiative, points out that rarely, if ever, does an author report a negative result when responding to a query (personal communication). We believe that the current peer-review process can play a role in fostering a culture of selective reporting or, worse, even falsifying or fabricating data in order to appease reviewers.