Pooling of direct and indirect evidence from randomized trials, known as mixed treatment comparisons (MTC), is becoming increasingly common in the clinical literature. MTC allows coherent judgements on which of several treatments is the most effective and produces estimates of the relative effects of each treatment compared with every other treatment in a network. We introduce two methods for checking the consistency of direct and indirect evidence. The first method (back-calculation) infers the contribution of indirect evidence from the direct evidence and the output of an MTC analysis, and is useful when the only available data consist of pooled summaries of the pairwise contrasts. The second, more general but computationally intensive, method is based on 'node-splitting', which separates the evidence on a particular comparison (node) into 'direct' and 'indirect' and can be applied to networks where trial-level data are available. The methods are illustrated with examples from the literature. We take a hierarchical Bayesian approach to MTC, implemented using WinBUGS and R. We show that both methods are useful in identifying potential inconsistencies in different types of network and that they illustrate how the direct and indirect evidence combine to produce the posterior MTC estimates of relative treatment effects. This allows users to understand how MTC synthesis pools the data and what is 'driving' the final estimates. We end with some considerations on the modelling assumptions being made and the problems with extending the back-calculation method to trial-level data, and we discuss our methods in the context of the existing literature.
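As a minimal illustration of the back-calculation idea (not the authors' full Bayesian implementation), the following Python sketch works on a normal approximation: it treats the MTC estimate of a contrast as a precision-weighted average of the direct and indirect evidence and recovers the implied indirect estimate, together with an approximate z-statistic for the direct-versus-indirect discrepancy. The function names and all numbers are hypothetical.

    # Back-calculation sketch on a normal approximation: given a pooled
    # direct estimate and the MTC estimate for the same contrast, recover
    # the implied indirect estimate. Numbers are illustrative only.
    import math

    def back_calculate(mean_dir, se_dir, mean_mtc, se_mtc):
        """Infer the indirect estimate implied by the direct and MTC
        estimates, treating the MTC estimate as a precision-weighted
        average of direct and indirect evidence."""
        w_dir, w_mtc = 1 / se_dir**2, 1 / se_mtc**2
        w_ind = w_mtc - w_dir              # precision of indirect evidence
        if w_ind <= 0:
            raise ValueError("MTC estimate no more precise than direct alone")
        mean_ind = (w_mtc * mean_mtc - w_dir * mean_dir) / w_ind
        return mean_ind, math.sqrt(1 / w_ind)

    def inconsistency_z(mean_dir, se_dir, mean_ind, se_ind):
        """Approximate z-statistic for direct-vs-indirect disagreement."""
        return (mean_dir - mean_ind) / math.sqrt(se_dir**2 + se_ind**2)

    # Illustrative log-odds ratios for one comparison (made-up numbers):
    m_ind, s_ind = back_calculate(mean_dir=-0.50, se_dir=0.20,
                                  mean_mtc=-0.35, se_mtc=0.15)
    print(m_ind, s_ind, inconsistency_z(-0.50, 0.20, m_ind, s_ind))

A large absolute z-statistic flags a comparison where the direct and indirect evidence may be inconsistent and warrants closer inspection, which is the role node-splitting plays in the more general trial-level setting.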
Background: The results of randomized controlled trials (RCTs) on time-to-event outcomes are usually reported as median times to event and Cox hazard ratios. These do not constitute the sufficient statistics required for meta-analysis or cost-effectiveness analysis, and their use in secondary analyses requires strong assumptions that may not have been adequately tested. To enhance the quality of secondary data analyses, we propose a method that derives from published Kaplan-Meier (KM) survival curves a close approximation to the original individual patient time-to-event data from which they were generated.

Methods: We develop an algorithm that maps from digitised curves back to KM data by finding numerical solutions to the inverted KM equations, using information on numbers of events and numbers at risk where available. The reproducibility and accuracy of survival probabilities, median survival times and hazard ratios based on reconstructed KM data were assessed by comparing published statistics (survival probabilities, medians and hazard ratios) with statistics based on repeated reconstructions by multiple observers.

Results: The validation exercise established that there was no material systematic error and that there was a high degree of reproducibility for all statistics. Accuracy was excellent for survival probabilities and medians; for hazard ratios, reasonable accuracy can be obtained only if at least numbers at risk or the total number of events are reported.

Conclusion: The algorithm is a reliable tool for meta-analysis and cost-effectiveness analyses of RCTs reporting time-to-event data. It is recommended that all RCTs report information on numbers at risk and the total number of events alongside KM curves.
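The following Python sketch illustrates the inversion of the KM equation S(t_i) = S(t_{i-1})(1 - d_i/n_i) in a deliberately simplified setting that assumes no censoring within the reported intervals; the published algorithm additionally allocates censored observations within intervals using the reported numbers at risk. Function names and data are invented for illustration.

    # Simplified inversion of the Kaplan-Meier equations: each drop in the
    # digitised curve between numbers-at-risk times is attributed entirely
    # to events (no within-interval censoring is assumed here).

    def reconstruct_events(times, surv, n_risk):
        """times, surv: digitised KM coordinates, with surv[0] = 1 at time 0;
        n_risk: number at risk entering each interval. Returns the estimated
        number of events per interval from S(t_i) = S(t_{i-1})(1 - d_i/n_i)."""
        events = []
        for i in range(1, len(times)):
            if surv[i - 1] == 0:
                events.append(0)
                continue
            d = round(n_risk[i - 1] * (1 - surv[i] / surv[i - 1]))
            events.append(d)
        return events

    # Illustrative digitised curve with numbers at risk read off the plot:
    times  = [0, 6, 12, 18]           # months
    surv   = [1.0, 0.80, 0.62, 0.50]  # digitised survival probabilities
    n_risk = [100, 78, 55]            # at risk entering each interval
    print(reconstruct_events(times, surv, n_risk))  # -> [20, 18, 11]

The reconstructed interval-level event counts and times can then be expanded into approximate individual patient data for refitting survival models.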
We set out a generalized linear model framework for the synthesis of data from randomized controlled trials. A common model is described, taking the form of a linear regression for both fixed and random effects synthesis, which can be implemented with normal, binomial, Poisson, and multinomial data. The familiar logistic model for meta-analysis with binomial data is a generalized linear model with a logit link function, which is appropriate for probability outcomes. The same linear regression framework can be applied to continuous outcomes, rate models, competing risks, or ordered category outcomes by using other link functions, such as identity, log, complementary log-log, and probit link functions. The common core model for the linear predictor can be applied to pairwise meta-analysis, indirect comparisons, synthesis of multiarm trials, and mixed treatment comparisons, also known as network meta-analysis, without distinction. We take a Bayesian approach to estimation and provide WinBUGS program code for a Bayesian analysis using Markov chain Monte Carlo simulation. An advantage of this approach is that it is straightforward to extend to shared parameter models where different randomized controlled trials report outcomes in different formats but from a common underlying model. Use of the generalized linear model framework allows us to present a unified account of how models can be compared using the deviance information criterion and how goodness of fit can be assessed using the residual deviance. The approach is illustrated through a range of worked examples for commonly encountered evidence formats.
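As a sketch of the common core model (here a fixed-effect, maximum-likelihood version rather than the Bayesian WinBUGS implementation described in the abstract), the following Python code evaluates the binomial likelihood under the logit link with linear predictor theta_ik = mu_i + d[t_ik] - d[t_i1]; swapping the link function and likelihood would give the other outcome types. The trial data are invented.

    # Fixed-effect network meta-analysis with binomial data and a logit
    # link: theta_ik = mu_i + d[t_ik] - d[t_i1], with d[0] fixed at 0.
    # Trial data below are invented for illustration.
    import numpy as np
    from scipy.special import expit      # inverse of the logit link
    from scipy.optimize import minimize

    # One row per trial arm: (trial index, treatment index, events r, size n)
    arms = [(0, 0, 9, 140), (0, 1, 23, 140),   # trial 0: treatments A, B
            (1, 0, 11, 78), (1, 2, 12, 85),    # trial 1: treatments A, C
            (2, 1, 29, 170), (2, 2, 21, 160)]  # trial 2: treatments B, C

    baseline = {0: 0, 1: 0, 2: 1}  # treatment in arm 1 of each trial

    def neg_log_lik(params, n_trials=3):
        """mu_i are trial-specific baselines; d[k] are basic parameters
        relative to treatment 0 (d[0] = 0 for identifiability)."""
        mu = params[:n_trials]
        d = np.concatenate([[0.0], params[n_trials:]])
        ll = 0.0
        for i, t, r, n in arms:
            theta = mu[i] + d[t] - d[baseline[i]]  # linear predictor
            p = expit(theta)                        # logit link inverted
            ll += r * np.log(p) + (n - r) * np.log(1 - p)
        return -ll

    fit = minimize(neg_log_lik, x0=np.zeros(5), method="BFGS")
    print(fit.x[3:])  # maximum-likelihood log-odds ratios d_AB, d_AC

Because the linear predictor is shared across outcome types, the same structure serves pairwise meta-analysis (one trial, two arms), indirect comparisons and full networks; only the likelihood and link change.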
Standard methods for indirect comparisons and network meta-analysis are based on aggregate data, with the key assumption that there is no difference between the trials in the distribution of effect-modifying variables. Methods which relax this assumption are becoming increasingly common for submissions to reimbursement agencies, such as the National Institute for Health and Care Excellence (NICE). These methods use individual patient data from a subset of trials to form population-adjusted indirect comparisons between treatments, in a specific target population. Recently proposed population adjustment methods include the Matching-Adjusted Indirect Comparison (MAIC) and the Simulated Treatment Comparison (STC). Despite increasing popularity, MAIC and STC remain largely untested. Furthermore, there is a lack of clarity about exactly how and when they should be applied in practice, and even whether the results are relevant to the decision problem. There is therefore a real and present risk that the assumptions being made in one submission to a reimbursement agency are fundamentally different to—or even incompatible with—the assumptions being made in another for the same indication. We describe the assumptions required for population-adjusted indirect comparisons, and demonstrate how these may be used to generate comparisons in any given target population. We distinguish between anchored and unanchored comparisons according to whether a common comparator arm is used or not. Unanchored comparisons make much stronger assumptions, which are widely regarded as infeasible. We provide recommendations on how and when population adjustment methods should be used, and the supporting analyses that are required to provide statistically valid, clinically meaningful, transparent and consistent results for the purposes of health technology appraisal. Simulation studies are needed to examine the properties of population adjustment methods and their robustness to breakdown of assumptions.
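To make the weighting step of MAIC concrete, the following Python sketch implements the method-of-moments estimator in which weights exp(x_i'a) are chosen so that the reweighted covariate means of the IPD trial match the aggregate means reported for the comparator trial's population. The covariates, target means and sample size are invented, and this shows only the weighting step, not a full anchored comparison.

    # MAIC weighting step (method of moments): centre the IPD covariates at
    # the target aggregate means, then minimise Q(a) = sum(exp(Xc @ a)); the
    # first-order conditions force the weighted covariate means to equal the
    # target means. Data are simulated for illustration.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))          # IPD covariates (e.g. age, severity)
    target_means = np.array([0.3, -0.2])   # aggregate means in comparator trial

    Xc = X - target_means                  # centre IPD covariates at the target

    def Q(a):
        return np.exp(Xc @ a).sum()

    a_hat = minimize(Q, x0=np.zeros(2), method="BFGS").x
    w = np.exp(Xc @ a_hat)                 # population-adjustment weights

    # Balance check: weighted means of X should reproduce target_means.
    print((w[:, None] * X).sum(axis=0) / w.sum())
    # Effective sample size shows how much information the weights retain:
    print(w.sum()**2 / (w**2).sum())

A sharply reduced effective sample size signals poor overlap between the trial populations, one of the supporting analyses that should accompany any population-adjusted comparison.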