OBJECTIVES: To conduct a simulation study assessing the impact of using "plausibly" vague priors for estimating the between-study heterogeneity parameter τ² in Bayesian meta-analysis. METHODS: A "plausibly" vague prior is one fitted more closely to the data while remaining sufficiently vague that results are driven by the data and model convergence is ensured. Several data-input scenarios were simulated, varying the overall treatment effect θ, the number of studies, and the magnitude of τ². We used a hierarchical random-effects model to conduct meta-analyses on the simulated data, fitted in a Bayesian framework with a Markov chain Monte Carlo algorithm. A literature review was conducted to identify the prior distributions to be assessed, and several settings of the distributions' parameters were tested. A fixed-effect model was also fitted as a comparator. For each scenario, we assessed the performance of the prior by measuring the mean absolute estimation error of the τ² estimate; the coverage probability and length of the credibility interval of the θ estimate; and the goodness-of-fit of the model. RESULTS: Thirty-two data-input scenarios were simulated and eight different prior scenarios were compared. Overall, the credibility intervals for θ were wider with the random-effects model than with the fixed-effect model; however, the coverage probability was better with the random-effects estimates. Regarding the mean absolute estimation error of τ², priors using a lognormal distribution were associated with very precise estimates, especially when the number of studies was low, whereas vaguer priors produced biased results. CONCLUSIONS: This empirical study showed that using a plausibly vague prior distribution for the variance parameter can improve the estimation of meta-analysis results, especially in a sparse-data context.
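The hierarchical model described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: study sizes, the lognormal prior parameters, and all numeric values are assumptions, and for compactness the posterior for the between-study standard deviation τ is evaluated on a grid (with θ integrated out analytically under a flat prior) rather than by MCMC.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Simulate one meta-analysis data set (all values are illustrative) ---
k = 10                # number of studies
theta_true = 0.5      # overall treatment effect theta
tau_true = 0.3        # between-study SD, so tau^2 = 0.09
s = rng.uniform(0.1, 0.4, size=k)                        # within-study SEs
y = rng.normal(theta_true, np.sqrt(tau_true**2 + s**2))  # observed effects

# --- Marginal log-likelihood of tau, with theta integrated out under a
#     flat prior: y_i ~ N(theta, tau^2 + s_i^2) ---
def log_marglik(tau):
    w = 1.0 / (tau**2 + s**2)
    theta_hat = np.sum(w * y) / np.sum(w)
    return (0.5 * np.sum(np.log(w))
            - 0.5 * np.log(np.sum(w))
            - 0.5 * np.sum(w * (y - theta_hat)**2))

# --- Posterior over a grid of tau values, lognormal prior on tau
#     (mu = -1.6, sigma = 0.8 is an assumed "plausibly vague" setting) ---
tau_grid = np.linspace(1e-3, 2.0, 2000)
log_prior = (-np.log(tau_grid)
             - (np.log(tau_grid) + 1.6)**2 / (2 * 0.8**2))
log_post = np.array([log_marglik(t) for t in tau_grid]) + log_prior
post = np.exp(log_post - log_post.max())
post /= post.sum()

tau2_hat = float(np.sum(post * tau_grid**2))   # posterior mean of tau^2
w = 1.0 / (tau2_hat + s**2)
theta_hat = float(np.sum(w * y) / np.sum(w))   # plug-in estimate of theta
```

In the simulation study proper, the mean absolute error |τ̂² − τ²| and the coverage and length of the credibility interval for θ would be accumulated over many replicated data sets like this one.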
OBJECTIVES: Where cross-trial comparisons are required, differences in patient populations have the potential to bias the comparison. While methods such as propensity scoring exist to match studies when individual-level data are available, until recently no such analysis was possible when only aggregate-level data were accessible for the historical study. Matching-Adjusted Indirect Comparison (MAIC) attempts to address this issue by weighting patients in the contemporary study, for which individual patient characteristics are available, so that their aggregate characteristics match those observed in the historical study. METHODS: We conducted a simulation study with a large number of patients (10,000) on control and intervention, with outcomes determined by 12 covariates. Six of these were in balance, and six were on average more favourable in the intervention arm, with the intervention also assumed to have a positive effect. Various scenarios were tested, and the level of error from the 'true' difference was calculated both for naïve comparisons between the simulated arms and for the error ...
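The MAIC weighting step described above can be sketched with the standard method-of-moments approach: centre the individual-level covariates at the historical study's aggregate means, then find weights of the form exp(xᵀβ) whose weighted covariate means reproduce those aggregates. This is a generic sketch with assumed dimensions and values, not the study's own code; a small Newton-Raphson loop stands in for an off-the-shelf optimiser.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative individual patient data for the contemporary trial ---
n, p = 500, 3                              # assumed sample size and covariates
X = rng.normal(0.0, 1.0, size=(n, p))      # individual-level covariates
target_means = np.array([0.3, -0.2, 0.1])  # aggregate means reported by the
                                           # historical study (assumed values)

# --- MAIC weights by the method of moments: minimise sum(exp(Xc @ beta)),
#     whose stationary point makes the weighted means match the targets ---
Xc = X - target_means                      # centre IPD at the target means
beta = np.zeros(p)
for _ in range(50):                        # Newton-Raphson on a convex objective
    w = np.exp(Xc @ beta)
    grad = Xc.T @ w
    hess = (Xc * w[:, None]).T @ Xc
    step = np.linalg.solve(hess, grad)
    beta -= step
    if np.max(np.abs(step)) < 1e-10:
        break

w = np.exp(Xc @ beta)
reweighted_means = (w[:, None] * X).sum(axis=0) / w.sum()
# reweighted_means now matches target_means: the contemporary arm has been
# reweighted to the historical study's aggregate covariate profile.
```

The comparison of interest is then made between the weighted contemporary outcomes and the historical aggregate outcomes, which is what the simulation study above evaluates against the 'true' difference.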