Objective To investigate the risk of pancreatitis associated with the use of incretin-based treatments in patients with type 2 diabetes mellitus.

Design Systematic review and meta-analysis.

Data sources Medline, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), and ClinicalTrials.gov.

Eligibility criteria Randomised and non-randomised controlled clinical trials, prospective or retrospective cohort studies, and case-control studies of treatment with glucagon-like peptide-1 (GLP-1) receptor agonists or dipeptidyl peptidase-4 (DPP-4) inhibitors in adults with type 2 diabetes mellitus compared with placebo, lifestyle modification, or active anti-diabetic drugs.

Data collection and analysis Pairs of trained reviewers independently screened for eligible studies, assessed risk of bias, and extracted data. A modified Cochrane tool for randomised controlled trials and a modified version of the Newcastle-Ottawa scale for observational studies were used to assess bias. We pooled data from randomised controlled trials using Peto odds ratios, and conducted four prespecified subgroup analyses and a post hoc subgroup analysis. Because of variation in outcome measures and forms of data, we describe the results of observational studies without a pooled analysis.

Results 60 studies (n=353 639), consisting of 55 randomised controlled trials (n=33 350) and five observational studies (three retrospective cohort studies, and two case-control studies; n=320 289), were included. Pooled estimates of 55 randomised controlled trials (at low or moderate
Objectives To examine the association between dipeptidyl peptidase-4 (DPP-4) inhibitors and the risk of heart failure or hospital admission for heart failure in patients with type 2 diabetes.

Design Systematic review and meta-analysis of randomised and observational studies.

Data sources
Well-conducted randomized clinical trials (RCTs) are the gold standard for evaluating the safety and efficacy of medical therapeutics. Yet most often, the single group of individuals who conducted the trial are the only ones who have access to the raw data, conduct the analysis, and publish the study results. This limited access does not typically allow others to replicate the trial findings. Given the time and expense required to conduct an RCT, it is often unlikely that others will independently repeat a similar experiment. Thus, the scientific community and the public often accept the results produced and published by the original research team without an opportunity for reanalysis. Increasingly, however, opinions and empirical data are challenging the assumption that the analysis of a clinical trial is straightforward and that analysis by any other group would obtain the same results. [1][2][3]

In this issue of JAMA, Ebrahim et al 4 report their findings based on a rigorous search of previously published reanalyses of RCTs. Their first surprising and discomforting finding was just how infrequently data reanalysis has occurred in medical research. Searching the literature from 1966 to the present, the authors found only 37 reports that met their criteria as an RCT reanalysis. Of these few reanalyses, the majority (84%) had overlapping authors with the original report. Thus, reanalyses are not only rare, but most of those reported were not fully independent of the original research group. Despite this overlap, Ebrahim et al report that about half of the reanalyses differed in statistical or analytic approaches, a third differed in the definitions or measurements of outcomes, and, most important, a third led to interpretations and conclusions different from those in the original article. While the definition of what constituted different trial analyses, study end points, findings, and interpretations is subjective, the authors' general conclusions were consistent with an emerging literature indicating that RCT reanalysis can yield results and conclusions different from those originally published.

Even when the original investigators present evidence in different venues, it is not always consistent. For example, there is evidence from trials that data presented to the US Food and Drug Administration (FDA) may differ in important ways from those originally presented at scientific sessions or published in medical journals. Rising et al 5 assessed clinical trial information provided to the FDA and reported a 9% discordance between the conclusions in the report to the FDA and those in the published article. Not unexpectedly, all were in the direction favoring the drug. Another example is discordance between what is reported in ClinicalTrials.gov and what is published in journal articles. Hartung et al 2 showed that in a random sample of phase 3 and 4 trials, 15% had a primary end point in the main article that differed from the primary end point the trialists reported in ClinicalTrials.gov. Moreover, 22% re...