Introduction: Alogliptin is an oral antihyperglycemic agent and selective inhibitor of the enzyme dipeptidyl peptidase-4 (DPP-4), approved for the treatment of type 2 diabetes mellitus (T2DM). There are currently no comparative data to support the use of alogliptin in combination with metformin (met) and a sulfonylurea (SU). A decision-focused network meta-analysis (NMA) was performed to compare the relative efficacy and safety of alogliptin 25 mg once daily with other DPP-4 inhibitors as part of a triple-therapy regimen for patients inadequately controlled on metformin and SU dual therapy.

Methods: A systematic literature review was conducted to identify published reports of randomized controlled trials (RCTs) that compared alogliptin with other DPP-4 inhibitors (linagliptin, saxagliptin, sitagliptin, and vildagliptin) at their Summary of Product Characteristics (SmPC) recommended daily doses, added on to metformin and SU. A comprehensive comparative analysis involving frequentist meta-analysis and Bayesian NMA compared alogliptin with each DPP-4 inhibitor separately and with the class as a group. Quasi-random-effects models were introduced when random-effects models could not be estimated.

Results: The review identified 2186 articles, and 94 full-text articles were assessed for eligibility. Eight RCTs contained appropriate data for inclusion in the NMA. All analyses over all trial population sets produced very similar results, showing that alogliptin 25 mg is at least as effective (as measured by change in HbA1c from baseline, and supported by other outcome measures: change in body weight and fasting plasma glucose [FPG] from baseline) and as safe (as measured by incidence of hypoglycemia and of adverse events leading to study discontinuation) as the other DPP-4 inhibitors in triple therapy.

Conclusion: This decision-focused systematic review and NMA demonstrated that alogliptin 25 mg daily has similar efficacy and safety compared with other DPP-4 inhibitors for the treatment of T2DM in adults inadequately controlled on metformin and SU. (Funded by Takeda Development Centre Americas; EXAMINE ClinicalTrials.gov number, NCT00968708.)

Electronic supplementary material: The online version of this article (doi:10.1007/s13300-017-0245-8) contains supplementary material, which is available to authorized users.
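The abstract mentions both frequentist meta-analysis and Bayesian NMA but does not detail the pooling model. As a minimal sketch of the frequentist side of such an analysis, the following implements DerSimonian-Laird random-effects pooling of study-level mean differences in HbA1c change; the study estimates and standard errors below are invented for illustration and are not taken from the included RCTs.

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects pooling of study-level mean differences (DerSimonian-Laird).

    effects : per-study mean difference in HbA1c change (drug vs comparator)
    ses     : corresponding standard errors
    """
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                            # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * effects) / np.sum(w)     # fixed-effect pooled estimate
    q = np.sum(w * (effects - mu_fe) ** 2)      # Cochran's Q heterogeneity statistic
    k = len(effects)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (ses**2 + tau2)                # random-effects weights
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se_re, tau2

# Hypothetical study-level estimates (% HbA1c change vs comparator) - illustration only
pooled, se, tau2 = dersimonian_laird([-0.55, -0.62, -0.48], [0.08, 0.10, 0.09])
print(f"pooled difference {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```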
Objectives: Network meta-analyses (NMA) provide estimates of comparative efficacy for multiple treatments based on an analysis of connected networks of trial comparisons. A key concern is the comparability of treatment-effect estimates from different trials. Where there is both indirect and direct evidence for one or more comparisons ('loops' in the network), it is possible to evaluate the 'consistency' of the network empirically.

Methods: A variety of methods have been proposed to examine inconsistency, including: (i) node-splitting, where the direct and indirect estimates are compared across the network; (ii) comparison to an 'inconsistency' model, where estimates for each treatment comparison are allowed to be independent; (iii) inclusion of treatment-by-design interaction terms; (iv) investigation of residual deviance estimates for individual trial arms; and (v) investigation of mixed predictive p-values. We compare the implementation and, most importantly, the interpretation of these methods using a previously published NMA in type 2 diabetes. In this analysis, HbA1c was compared across six treatments in a network of 22 studies with multiple 'loops'.

Results: The methods agreed in showing the presence of inconsistency within the network. For example, the inconsistency model showed an improved fit (DIC -62.35) compared to the consistency model (DIC -60.25). The node-splitting method identified statistically significant inconsistency in two treatment arcs (liraglutide 1.8 mg vs placebo and liraglutide 1.8 mg vs exenatide QW).

Conclusions: The alternative methods vary in their ability to provide an omnibus 'test' of inconsistency across the network and in their ability to identify which parts of the network contain inconsistencies. We highlight that none of the methods alone can identify individual studies as the cause of inconsistencies, and argue that we need to consider the whole structure of the network and the characteristics of the studies (in terms of treatments, subjects, and design) within the network.

PDB17
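The abstract names node-splitting only at a high level. As a minimal sketch of the underlying idea of comparing direct and indirect evidence within a loop (a simple Bucher-style check rather than the full Bayesian node-split used in the analysis), the example below uses invented effect estimates and standard errors.

```python
from math import sqrt
from scipy.stats import norm

def loop_inconsistency(d_ab, se_ab, d_ac, se_ac, d_bc, se_bc):
    """Compare direct and indirect evidence for the A-vs-B comparison in an A-B-C loop.

    The indirect A-vs-B estimate is built from the A-vs-C and B-vs-C comparisons;
    the difference between direct and indirect estimates is tested with a z-test.
    """
    d_ab_indirect = d_ac - d_bc
    se_indirect = sqrt(se_ac**2 + se_bc**2)
    diff = d_ab - d_ab_indirect                  # inconsistency estimate
    se_diff = sqrt(se_ab**2 + se_indirect**2)
    z = diff / se_diff
    p = 2 * (1 - norm.cdf(abs(z)))
    return diff, se_diff, p

# Hypothetical mean differences in HbA1c (illustration only, not the published data)
diff, se, p = loop_inconsistency(-0.40, 0.10, -0.90, 0.12, -0.55, 0.11)
print(f"direct - indirect = {diff:.2f} (SE {se:.2f}), p = {p:.3f}")
```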
Objectives: Notoriety bias is defined as "a selection bias in which a case has a greater chance of being reported if the subject is exposed to the studied factor known to cause, thought to cause, or likely to cause the event of interest". This study aimed to determine whether notoriety bias exists in the FDA Adverse Event Reporting System (FAERS) database and to estimate its impact on signal strength.

Methods: Publicly available FAERS data were used for the analysis. Thirty-one drugs that had a label change or safety alert issued by the FDA were considered. For each drug, the four quarters before and after the safety alert were reviewed for number of reports, Reporting Odds Ratio (ROR), and Proportional Reporting Ratio (PRR). The Wilcoxon signed-rank test was used to compare the number of reports and signal strength before and after the alert.

Results: Reporting increased for 11 drugs after the FDA safety alert/label change, and decreased or remained unchanged for 20 drugs. The Wilcoxon signed-rank test showed no statistically significant difference in the number of reports before and after the safety alert (p: 0.330, Z: -0.974). 14 (45.16%) drugs had an increase in ROR, while 17 (54.83%) had a decrease after the FDA safety alert (p: 0.953, Z: -0.059). 14 (45.16%) drugs had an increase in PRR, while 17 (54.83%) had a decrease after the safety alert (p: 0.914, Z: -0.108).

Conclusions: Although a few FDA safety alerts/warnings had a strong and immediate impact, many had no impact on AE reporting or signal strength. This study found that over-reporting due to notoriety bias does not exist in the FAERS database and that the overall disproportionality in signal estimates is not altered by safety alerts.
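The abstract does not spell out the disproportionality formulas. As a minimal sketch using the usual 2x2 contingency-table definitions of ROR and PRR (all counts and ROR values below are invented for illustration, not FAERS figures), the before/after comparison can be run with SciPy's Wilcoxon signed-rank test.

```python
from scipy.stats import wilcoxon

def ror_prr(a, b, c, d):
    """Disproportionality measures from a 2x2 table of spontaneous reports.

    a: reports of the event of interest for the drug of interest
    b: reports of all other events for the drug of interest
    c: reports of the event of interest for all other drugs
    d: reports of all other events for all other drugs
    """
    ror = (a / b) / (c / d)              # Reporting Odds Ratio = (a*d)/(b*c)
    prr = (a / (a + b)) / (c / (c + d))  # Proportional Reporting Ratio
    return ror, prr

# Hypothetical counts (illustration only)
print(ror_prr(a=120, b=4880, c=900, d=394100))

# Paired comparison of hypothetical per-drug RORs before vs after a safety alert
ror_before = [1.8, 2.1, 0.9, 3.4, 1.2, 2.6]
ror_after  = [1.9, 1.8, 1.1, 3.0, 1.7, 2.55]
stat, p = wilcoxon(ror_before, ror_after)
print(f"Wilcoxon statistic {stat:.2f}, p = {p:.3f}")
```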