This cross-sectional study compares the author and journal characteristics of retracted COVID-19 articles with those of retracted articles on other topics.
Objective: This study examined the extent to which trials presented at major international medical conferences in 2016 consistently reported their study design, endpoints and results across conference abstracts, published article abstracts and press releases.

Design: Cross-sectional analysis of clinical trials presented at 12 major medical conferences in the USA in 2016. Conferences were identified from a list of the largest clinical research meetings aggregated by the Healthcare Convention and Exhibitors Association and were included if their abstracts were publicly available. From these conferences, all late-breaker clinical trials were included, as well as a random selection of all other clinical trials, such that the total sample included up to 25 trial abstracts per conference.

Main outcome measures: First, it was determined whether trials were registered and reported results in an International Committee of Medical Journal Editors-approved clinical trial registry. Second, it was determined whether trial results were published in a peer-reviewed journal. Finally, information on trial media coverage and press releases was collected using LexisNexis. For all published trials, the consistency of reporting of the following characteristics was examined by comparing the trials' conference and publication abstracts: primary efficacy endpoint definition, safety endpoint identification, sample size, follow-up period, primary endpoint effect size and characterisation of trial results. For all published abstracts with press releases, the characterisation of trial results across conference abstracts, press releases and publications was compared. Reporting was considered consistent when identical information was presented across abstracts and press releases. Primary analyses were descriptive; secondary analyses included χ² tests and multiple logistic regression.

Results: Among 240 clinical trials presented at 12 major medical conferences, 208 (86.7%) were registered, 95 (39.6%) reported summary results in a registry and 177 (73.8%) were published; 82 (34.2%) were covered by the media and 68 (28.3%) had press releases. Among the 177 published trials, 171 (96.6%) reported the definition of primary efficacy endpoints consistently across conference and publication abstracts, whereas 96/128 (75.0%) consistently identified safety endpoints. There were 107/172 (62.2%) trials with consistent sample sizes across conference and publication abstracts, 101/137 (73.7%) that reported their follow-up periods consistently, 92/175 (52.6%) that described their effect sizes consistently and 157/175 (89.7%) that characterised their results consistently. Among the trials that were published and had press releases, 32/32 (100%) characterised their results consistently across conference abstracts, press releases and publication abstracts. No trial characteristics were associated with consistent reporting of primary efficacy endpoints.

Conclusions: For clinical trials presented at major medical conferences, primary efficacy endpoint definitions were reported consistently and results were characterised consistently across conference abstracts, registry entries and publication abstracts; consistency rates were lower for sample sizes, follow-up periods and effect size estimates.

Registration: This study was registered on the Open Science Framework (https://doi.org/10.17605/OSF.IO/VGXZY).
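The abstract above notes that secondary analyses used χ² tests and multiple logistic regression to look for trial characteristics associated with consistent reporting of primary efficacy endpoints. The sketch below is purely illustrative, not the authors' code: the data and variable names (consistent_primary_endpoint, late_breaker, industry_funded, sample_size) are hypothetical stand-ins, and the published analysis was run on the actual sample of 177 published trials.

```python
# Illustrative sketch of the kind of secondary analysis described above:
# a chi-squared test and a multiple logistic regression of consistent
# primary-endpoint reporting on trial characteristics (hypothetical data).
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per published trial.
trials = pd.DataFrame({
    "consistent_primary_endpoint": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
    "late_breaker":                [1, 0, 0, 1, 1, 0, 0, 1, 0, 1],
    "industry_funded":             [1, 1, 0, 0, 1, 0, 1, 1, 0, 0],
    "sample_size":                 [120, 450, 500, 900, 60, 70, 150, 75, 300, 220],
})

# Chi-squared test: consistency of endpoint reporting vs. late-breaker status.
table = pd.crosstab(trials["late_breaker"], trials["consistent_primary_endpoint"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Multiple logistic regression: consistency on several trial characteristics.
model = smf.logit(
    "consistent_primary_endpoint ~ late_breaker + industry_funded + sample_size",
    data=trials,
).fit(disp=False)
print(model.params)
print(model.pvalues)
```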
The impact and effectiveness of clinical trial data sharing initiatives may differ depending on the data sharing model used. We characterized outcomes associated with models previously used by the U.S. National Institutes of Health (NIH): the National Heart, Lung, and Blood Institute's (NHLBI) centralized model and the National Cancer Institute's (NCI) decentralized model. We identified trials completed in 2010–2013 that met NIH data sharing criteria, matched studies on cost and/or size, and determined whether trial data were shared and, for those that were, the number of secondary internal publications (authored by at least one member of the original research team) and shared data publications (authored by a team external to the original research team). We matched 77 NHLBI-funded trials to 77 NCI-funded trials; among these, 20 NHLBI-sponsored trials (26%) and 4 NCI-sponsored trials (5%) shared data (OR 6.4, 95% CI: 2.1, 19.8). From the 4 NCI-sponsored trials sharing data, we identified 65 secondary internal and 2 shared data publications. From the 20 NHLBI-sponsored trials sharing data, we identified 188 secondary internal and 53 shared data publications. The NHLBI's centralized data sharing model was associated with more trials sharing data and more shared data publications than the NCI's decentralized model.
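As a quick check on the arithmetic, the odds ratio reported above (20 of 77 NHLBI trials vs 4 of 77 NCI trials sharing data) can be reproduced with a standard Wald interval on the log odds ratio. The short Python sketch below is illustrative only; the variable names are ours, and the original analysis may have computed the confidence interval differently.

```python
# Reproduce the reported odds ratio and 95% CI for data sharing:
# NHLBI 20/77 vs NCI 4/77, using a Wald (log-odds-ratio) interval.
import math

nhlbi_shared, nhlbi_total = 20, 77
nci_shared, nci_total = 4, 77

a = nhlbi_shared                      # NHLBI trials that shared data
b = nhlbi_total - nhlbi_shared        # NHLBI trials that did not (57)
c = nci_shared                        # NCI trials that shared data
d = nci_total - nci_shared            # NCI trials that did not (73)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f}, 95% CI: {lower:.1f} to {upper:.1f}")
# Prints approximately: OR = 6.4, 95% CI: 2.1 to 19.8
```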