Objective: Prospective registration has been widely implemented and accepted as a best practice in clinical research, but retrospective registration is still commonly found. We assessed to what extent retrospective registration is reported transparently in journal publications and investigated factors associated with transparent reporting. Design: We used a dataset of trials registered in ClinicalTrials.gov or Deutsches Register Klinischer Studien (DRKS), with a German University Medical Center as the lead centre, completed in 2009–2017, and with a corresponding peer-reviewed results publication. We extracted all registration statements from results publications of retrospectively registered trials and assessed whether they mention or justify the retrospective registration. We analysed associations of retrospective registration and reporting thereof with registration number reporting, International Committee of Medical Journal Editors (ICMJE) membership or ICMJE-following status, and industry sponsorship using the χ² or Fisher's exact test. Results: In the dataset of 1927 trials with a corresponding results publication, 956 (53.7%) were retrospectively registered. Of those, 2.2% (21) explicitly report the retrospective registration in the abstract and 3.5% (33) in the full text. In 2.1% (20) of publications, authors provide an explanation for the retrospective registration in the full text. Registration numbers were significantly underreported in abstracts of retrospectively registered trials compared with prospectively registered trials. Publications in ICMJE member journals did not have statistically significantly higher rates of either prospective registration or disclosure of retrospective registration, and publications in journals claiming to follow ICMJE recommendations showed statistically significantly lower rates than non-ICMJE-following journals. Industry sponsorship of trials was significantly associated with higher rates of prospective registration, but not with transparent registration reporting. Conclusions: Contrary to ICMJE guidance, retrospective registration is disclosed and explained in only a small number of retrospectively registered studies. Disclosing the retrospective nature of the registration would require only a brief statement in the manuscript and could be easily implemented by journals.
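The abstract names χ² and Fisher's exact tests for its association analyses; the minimal sketch below shows what such a contingency-table test looks like. The cell counts are hypothetical placeholders (only the row totals match the reported 971 prospective and 956 retrospective trials), and scipy is an assumed tool, not one named by the study.

```python
# Minimal sketch of a contingency-table association test of the kind
# described in the abstract. Cell counts are HYPOTHETICAL placeholders;
# only the row totals (971 prospective, 956 retrospective) come from the study.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: prospectively vs retrospectively registered trials
# Columns: registration number reported in the abstract (yes / no)
table = [[700, 271],   # prospective: reported / not reported (hypothetical split)
         [520, 436]]   # retrospective: reported / not reported (hypothetical split)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4g}")

# Fisher's exact test is the usual fallback when expected cell counts are small
odds_ratio, p_exact = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, exact p = {p_exact:.4g}")
```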
Research ethics committees (RECs) and regulatory agencies assess whether the benefits of a proposed early-stage clinical trial outweigh the risks based on preclinical studies reported in investigator's brochures (IBs). Recent studies have indicated that preclinical evidence presented in IBs is reported in a way that does not enable proper risk-benefit assessment. We interviewed different stakeholders (regulators, REC members, industry representatives, preclinical and clinical researchers, ethicists, and meta-researchers) about their views on measures to increase the completeness and robustness of preclinical evidence reporting in IBs. This study was preregistered (https://osf.io/nvzwy/). We used purposive sampling and invited stakeholders to participate in an online semi-structured interview between March and June 2021. Themes were derived using inductive content analysis. We used a strengths, weaknesses, opportunities, and threats (SWOT) matrix to categorize our findings. Twenty-seven international stakeholders participated. The interviewees pointed to several strengths and opportunities to improve completeness and robustness, mainly more transparent and systematic justifications of the inclusion of studies. However, they also mentioned weaknesses and threats that could undermine efforts to enable more thorough assessment: the interviewees stressed that current review practices are sufficient to ensure the safe conduct of first-in-human trials, and they feared that changes to the IB structure or review process could overburden stakeholders and slow drug development. In principle, having more robust decision-making processes in place aligns with the interests of all stakeholders and with many current initiatives to increase the translatability of preclinical research and limit uninformative or ill-justified trials early in the development process. Further research should investigate measures that could be implemented to benefit all stakeholders.
Background With rising cost pressures on health care systems, machine-learning (ML)-based algorithms are increasingly used to predict health care costs. Despite their potential advantages, the successful implementation of these methods could be undermined by biases introduced in the design, conduct, or analysis of studies seeking to develop and/or validate ML models. The utility of such models may also be negatively affected by poor reporting of these studies. In this systematic review, we aim to evaluate the reporting quality, methodological characteristics, and risk of bias of ML-based prediction models for individual-level health care spending. Methods We will systematically search PubMed and Embase to identify studies developing, updating, or validating ML-based models to predict an individual's health care spending for any medical condition, over any time period, and in any setting. We will exclude prediction models of aggregate-level health care spending, models used to infer causality, models using radiomics or speech parameters, models of non-clinically validated predictors (e.g., genomics), and cost-effectiveness analyses without predicting individual-level health care spending. We will extract data based on the Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies (CHARMS), previously published research, and relevant recommendations. We will assess the adherence of ML-based studies to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement and examine the inclusion of transparency and reproducibility indicators (e.g., statements on data sharing). To assess the risk of bias, we will apply the Prediction model Risk Of Bias Assessment Tool (PROBAST). Findings will be stratified by study design, ML methods used, population characteristics, and medical field. Discussion Our systematic review will appraise the quality, reporting, and risk of bias of ML-based models for individualized health care cost prediction. This review will provide an overview of the available models and give insights into the strengths and limitations of using ML methods for the prediction of health spending.
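For orientation, the class of models this review will appraise — individual-level cost prediction from tabular covariates — can be sketched as follows. The data, features, and model choice below are illustrative assumptions, not drawn from any study in the review:

```python
# Illustrative sketch of an individual-level health care cost prediction model
# of the kind the review targets. All data and features here are SYNTHETIC.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical tabular covariates: age, chronic-condition count, prior-year spending
X = np.column_stack([
    rng.integers(18, 90, n),      # age in years
    rng.poisson(1.5, n),          # number of chronic conditions
    rng.gamma(2.0, 1500.0, n),    # prior-year spending (right-skewed, like real cost data)
])
# Synthetic outcome: next-year individual spending
y = 200 * X[:, 1] + 0.6 * X[:, 2] + rng.gamma(2.0, 500.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"MAE on held-out individuals: {mae:.0f}")
```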
Objective: Prospective registration of clinical research has been widely implemented and advocated for many reasons: to detect and mitigate publication bias, selective reporting, and undisclosed changes in the determination of primary and secondary outcomes. Prospective registration allows for public scrutiny of trials, facilitates the identification of gaps in research, and supports the coordination of efforts by preventing unnecessary duplication. Retrospective registration undermines many of these purposes but is commonly found. We provide a comprehensive analysis of retrospective registration and the reporting thereof in publications, as well as factors associated with these practices. Design: For this cross-sectional study, we used a validated dataset of trials registered on ClinicalTrials.gov or DRKS, with a German University Medical Center as the lead center, completed between 2009 and 2017, and with at least one peer-reviewed results publication. We extracted all registration statements from all results publications of retrospectively registered trials, including mentions and justifications of retrospective registration. We analyzed associations between key trial variables and different registration and reporting practices. Results: In our dataset of 1927 trials with a corresponding results publication, 956 (53.7%) were retrospectively registered. Of those, 2.2% (21) explicitly report the retrospective registration in the abstract and 3.5% (33) in the full text. In 2.1% (20) of publications, authors provide a justification or explanation for the retrospective registration in the full text. Registration numbers were significantly underreported in abstracts of retrospectively registered trials (p < 0.001). Publications in ICMJE member journals had higher rates of both prospective registration and disclosure of retrospective registration, although the differences were not statistically significant. Publications in journals claiming to follow ICMJE recommendations showed lower rates than non-ICMJE-following journals. Conclusions: In contrast to ICMJE guidance, retrospective registration is disclosed and explained in only a small number of retrospectively registered studies. Lack of disclosure might lead readers to wrongly interpret the registration as a quality criterion when, in the case of retrospective registration, it rather describes a concern. Disclosing the retrospective nature of the registration would require only one or two additional sentences in the manuscript and could be easily implemented by publishers.