Peer review practices differ substantially between journals and disciplines. This study presents the results of a survey of 322 editors of journals in ecology, economics, medicine, physics and psychology. We found that 49% of the journals surveyed checked all manuscripts for plagiarism, that 61% allowed authors both to recommend and to argue against specific reviewers, and that fewer than 6% used a form of open peer review. Most journals did not have an official policy on altering reports from reviewers, but 91% of editors identified at least one situation in which it was appropriate for an editor to alter a report. Editors were also asked for their views on five issues related to publication ethics. A majority expressed support for co-reviewing, reviewers requesting access to data, reviewers recommending citations to their own work, editors publishing in their own journals, and replication studies. Our results provide a window into a largely opaque aspect of the scientific process, and we hope the findings will inform the debate about the role and transparency of peer review in scholarly publishing.
Background: Investigations of transparency, reproducibility and replicability in science have been directed largely at individual studies. It is just as critical to explore these issues in syntheses of studies, such as systematic reviews, given their influence on decision-making and future research. We aim to explore the transparency, reproducibility and replicability of several components of systematic reviews with meta-analysis of the effects of health, social, behavioural and educational interventions.

Methods: The REPRISE (REProducibility and Replicability In Syntheses of Evidence) project consists of four studies. We will evaluate the completeness of reporting and the sharing of review data, analytic code and other materials in a random sample of 300 systematic reviews of interventions published in 2020 (Study 1). We will survey authors of systematic reviews to explore their views on sharing review data, analytic code and other materials, and their understanding of and opinions about replication of systematic reviews (Study 2). We will then evaluate the extent of variation in results when we (a) independently reproduce meta-analyses using the same computational steps and analytic code (if available) as the original review (Study 3), and (b) crowdsource teams of systematic reviewers to independently replicate a subset of methods (searches for studies, selection of studies for inclusion, collection of outcome data, and synthesis of results) in a sample of the original reviews; 30 reviews will be replicated by one team each, and two reviews will be replicated by 15 teams (Study 4).

Discussion: The REPRISE project takes a systematic approach to determining how reliable systematic reviews of interventions are. We anticipate that the results of the REPRISE project will inform strategies to improve the conduct and reporting of future systematic reviews.
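The reproduction step in Study 3 amounts to re-running the original meta-analytic computation on the same effect estimates. As a rough illustration of what that involves (this is not the REPRISE analysis pipeline; the function name and input data below are hypothetical, and a DerSimonian-Laird random-effects model is assumed only because it is common in intervention reviews):

```python
import math

def random_effects_meta(effects, ses):
    """DerSimonian-Laird random-effects pooling of study effect estimates.

    effects: study effect sizes (e.g. log risk ratios)
    ses: their standard errors
    Returns (pooled effect, standard error of pooled effect, tau^2).
    """
    w = [1 / se**2 for se in ses]                                   # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)     # fixed-effect pooled estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))   # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                   # between-study variance
    w_star = [1 / (se**2 + tau2) for se in ses]                     # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, math.sqrt(1 / sum(w_star)), tau2

# Hypothetical log risk ratios and standard errors from three trials
pooled, se, tau2 = random_effects_meta([-0.5, 0.1, -0.3], [0.10, 0.10, 0.15])
print(f"pooled {pooled:.3f}, 95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f}, tau^2 {tau2:.3f}")
```

Even with identical inputs, analytic choices such as the between-study variance estimator can shift the pooled estimate, which is one plausible source of the variation in results that Study 3 is designed to quantify.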
Objectives: To examine changes over time in the completeness of reporting and the frequency of sharing data, analytical code, and other review materials in systematic reviews, and the factors associated with these changes.

Design: Cross sectional meta-research study.

Population: Random sample of 300 systematic reviews with meta-analysis of aggregate data on the effects of a health, social, behavioural, or educational intervention. Reviews were indexed in PubMed, Science Citation Index, Social Sciences Citation Index, Scopus, and Education Collection in November 2020.

Main outcome measures: The extent of complete reporting and the frequency of sharing review materials in the systematic reviews indexed in 2020, compared with those in 110 systematic reviews indexed in February 2014. Associations between completeness of reporting and various factors (eg, self-reported use of reporting guidelines, journal policies on data sharing) were examined by calculating risk ratios and 95% confidence intervals.

Results: Several items were reported suboptimally among the 300 systematic reviews from 2020, such as a registration record for the review (n=113; 38%), a full search strategy for at least one database (n=214; 71%), methods used to assess risk of bias (n=185; 62%), methods used to prepare data for meta-analysis (n=101; 34%), and source of funding for the review (n=215; 72%). Only a few items not already reported at a high frequency in 2014 were reported more frequently in 2020. There was no evidence that reviews using a reporting guideline were more completely reported than reviews not using one. Reviews published in 2020 in journals that mandated either data sharing or the inclusion of data availability statements were more likely to share their review materials (eg, data, code files) than reviews in journals without such mandates (16/87 (18%) v 4/213 (2%)).

Conclusion: Incomplete reporting of several recommended items for systematic reviews persists, even in reviews that claim to have followed a reporting guideline. Journal policies on data sharing might encourage the sharing of review materials.
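The headline comparison in the results can be checked from the raw counts reported above. The sketch below is a back-of-envelope reproduction, not the authors' analysis code; it assumes the conventional log-scale Wald interval for a risk ratio, which may differ slightly from the method the authors used.

```python
import math

def risk_ratio(a, n1, c, n2, z=1.96):
    """Risk ratio with a log-scale Wald confidence interval.

    a/n1 = events/total in the exposed group; c/n2 = events/total in the
    comparison group. z=1.96 gives a 95% interval.
    """
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)   # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Counts from the Results: 16/87 reviews in journals with sharing mandates
# shared materials v 4/213 in journals without such mandates
rr, lo, hi = risk_ratio(16, 87, 4, 213)
print(f"RR {rr:.1f} (95% CI {lo:.1f} to {hi:.1f})")  # ~9.8 (3.4 to 28.5)
```

The wide interval reflects the small number of sharing reviews in both groups, so the size of the association is estimated imprecisely even though its direction is clear.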