In random-effects meta-analysis, the between-study variance (τ²) has a key role in assessing heterogeneity of study-level estimates and combining them to estimate an overall effect. For odds ratios, the most common methods suffer from bias in estimating τ² and the overall effect and produce confidence intervals with below-nominal coverage. An improved approximation to the moments of Cochran's Q statistic, suggested by Kulinskaya and Dollinger (KD), yields new point and interval estimators of τ² and of the overall log-odds-ratio. Another, simpler approach (SSW) uses weights based only on study-level sample sizes to estimate the overall effect. In extensive simulations we compare our proposed estimators with established point and interval estimators of τ² and of the overall log-odds-ratio (including the Hartung-Knapp-Sidik-Jonkman interval). Additional simulations included three estimators based on generalized linear mixed models and the Mantel-Haenszel fixed-effect estimator. Our simulations show that no single point estimator of τ² can be recommended exclusively, but Mandel-Paule and KD are better choices for small and large numbers of studies, respectively. The KD estimator provides reliable coverage of τ². Inverse-variance-weighted estimators of the overall effect are substantially biased, as are the Mantel-Haenszel odds ratio and the estimators from the generalized linear mixed models. The SSW estimator of the overall effect and a related confidence interval provide reliable point and interval estimation of the overall log-odds-ratio.
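The contrast between inverse-variance weighting and the SSW approach can be sketched in a few lines. This is a minimal illustration with hypothetical 2x2 tables, not the paper's implementation; the function name and the 0.5 continuity correction are assumptions:

```python
import numpy as np

def log_or_and_var(a, b, c, d):
    """Study-level log-odds-ratio and its large-sample variance from a
    2x2 table (a, b events/non-events in treatment; c, d in control),
    with the conventional 0.5 continuity correction (an assumption here)."""
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    return np.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

# Hypothetical data: three studies as (a, b, c, d) tables.
tables = [(12, 88, 8, 92), (30, 170, 20, 180), (5, 45, 9, 41)]
y, v = map(np.array, zip(*(log_or_and_var(*t) for t in tables)))
n = np.array([sum(t) for t in tables])   # total sample size per study

w_iv = 1 / v                             # inverse-variance weights
w_ssw = n                                # SSW: weights from sample sizes only
theta_iv = np.sum(w_iv * y) / np.sum(w_iv)
theta_ssw = np.sum(w_ssw * y) / np.sum(w_ssw)
```

Because the SSW weights do not depend on the estimated cell counts, they avoid the correlation between estimated effects and estimated variances that drives the bias of inverse-variance weighting.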
Methods for random-effects meta-analysis require an estimate of the between-study variance, τ². The performance of estimators of τ² (measured by bias and coverage) affects their usefulness in assessing heterogeneity of study-level effects and also the performance of related estimators of the overall effect. However, as we show, the performance of the methods varies widely among effect measures. For the effect measures mean difference (MD) and standardized mean difference (SMD), we use improved effect-measure-specific approximations to the expected value of Q to introduce two new methods of point estimation of τ² for MD (Welch-type (WT) and corrected DerSimonian-Laird) and one WT interval method. We also introduce one point estimator and one interval estimator of τ² for SMD. Extensive simulations compare our methods with four point estimators of τ² (the popular methods of DerSimonian-Laird, restricted maximum likelihood, and Mandel and Paule, and the less-familiar method of Jackson) and four interval estimators for τ² (profile likelihood, Q-profile, Biggerstaff and Jackson, and Jackson). We also study related point and interval estimators of the overall effect, including an estimator whose weights use only study-level sample sizes. We provide measure-specific recommendations from our comprehensive simulation study and discuss an example.
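The DerSimonian-Laird moment estimator that serves as the baseline comparison can be sketched from Cochran's Q. A minimal sketch with hypothetical inputs; the function name is an assumption:

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """Moment-based DerSimonian-Laird estimate of the between-study
    variance tau^2 from study effects y and within-study variances v."""
    y = np.asarray(y, float)
    w = 1 / np.asarray(v, float)               # inverse-variance weights
    theta_fe = np.sum(w * y) / np.sum(w)       # fixed-effect estimate
    q = np.sum(w * (y - theta_fe) ** 2)        # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - df) / c)              # truncate negative values at 0

# Hypothetical mean differences and within-study variances from five studies.
tau2 = dersimonian_laird_tau2([0.2, 0.5, -0.1, 0.4, 0.3],
                              [0.05, 0.08, 0.04, 0.10, 0.06])
```

The corrected methods in the abstract replace the naive expectation E(Q) = k - 1 used here with effect-measure-specific approximations.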
Background: Systematic reviews and meta-analyses of binary outcomes are widespread in all areas of application. The odds ratio, in particular, is by far the most popular effect measure. However, the standard meta-analysis of odds ratios using a random-effects model has a number of potential problems. An attractive alternative approach for the meta-analysis of binary outcomes uses a class of generalized linear mixed models (GLMMs). GLMMs are believed to overcome the problems of the standard random-effects model because they use a correct binomial-normal likelihood. However, this belief is based on theoretical considerations, and no adequate simulation study has assessed the performance of GLMMs in meta-analysis. This gap may be due to the computational complexity of these models and the resulting considerable time requirements.
Methods: The present study is the first to provide extensive simulations on the performance of four GLMM methods (models with fixed and random study effects and two conditional methods) for meta-analysis of odds ratios in comparison to the standard random-effects model.
Results: In our simulations, the hypergeometric-normal model provided less biased estimation of the heterogeneity variance than the standard random-effects meta-analysis using restricted maximum likelihood (REML) estimation when the data were sparse, but the REML method performed similarly for the point estimation of the odds ratio and better for the interval estimation.
Conclusions: It is difficult to recommend the use of GLMMs in the practice of meta-analysis. The problem of finding uniformly good methods of meta-analysis for binary outcomes is still open.
Supplementary material: The online version of this article (10.1186/s12874-018-0531-9) contains supplementary material, which is available to authorized users.
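The binomial-normal marginal likelihood at the heart of these GLMMs can be sketched per study via Gauss-Hermite quadrature. This is a minimal sketch, not the paper's implementation; the function name, the parameterization (control-arm log-odds mu, overall effect theta, heterogeneity SD tau), and the node count are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import binom

def study_marginal_loglik(x, n, mu, theta, tau, n_nodes=21):
    """Marginal log-likelihood of one treatment-arm count x out of n under a
    binomial GLMM with logit link: the study's log-odds is mu + theta + tau*u
    with u ~ N(0, 1), integrated out by Gauss-Hermite quadrature."""
    nodes, weights = hermgauss(n_nodes)
    # Change of variable u = sqrt(2)*z turns the N(0,1) integral into
    # the Gauss-Hermite form with weight exp(-z^2).
    eta = mu + theta + np.sqrt(2) * tau * nodes
    p = 1 / (1 + np.exp(-eta))
    lik = np.sum(weights * binom.pmf(x, n, p)) / np.sqrt(np.pi)
    return np.log(lik)
```

Maximizing the sum of such terms over all studies (plus the control-arm contributions) is what makes these models computationally heavier than two-stage inverse-variance methods.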
For meta‐analysis of studies that report outcomes as binomial proportions, the most popular measure of effect is the odds ratio (OR), usually analyzed as log(OR). Many meta‐analyses use the risk ratio (RR) and its logarithm because of its simpler interpretation. Although log(OR) and log(RR) are both unbounded, use of log(RR) must ensure that estimates are compatible with study‐level event rates in the interval (0, 1). These complications pose a particular challenge for random‐effects models, both in applications and in generating data for simulations. As background, we review the conventional random‐effects model and then binomial generalized linear mixed models (GLMMs) with the logit link function, which do not have these complications. We then focus on log‐binomial models and explore implications of using them; theoretical calculations and simulation show evidence of biases. The main competitors to the binomial GLMMs use the beta‐binomial (BB) distribution, either in BB regression or by maximizing a BB likelihood; a simulation produces mixed results. Two examples and an examination of Cochrane meta‐analyses that used RR suggest bias in the results from the conventional inverse‐variance–weighted approach. Finally, we comment on other measures of effect that have range restrictions, including risk difference, and outline further research.
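The range restriction on log(RR) described above is easy to demonstrate by simulation. A minimal sketch under assumed parameter values (control-arm probability, overall log-RR, and between-study SD are all illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
p0 = 0.4                     # control-arm event probability (assumed)
theta, tau = 0.5, 0.4        # overall log-RR and between-study SD (assumed)

# Draw study-level log-risk-ratios from the conventional normal
# random-effects model and check whether the implied treatment-arm
# probabilities p1 = p0 * RR stay inside (0, 1).
theta_i = rng.normal(theta, tau, size=10_000)
p1 = p0 * np.exp(theta_i)
share_invalid = np.mean(p1 >= 1)   # fraction of draws incompatible with a probability
```

Under these assumed values a substantial fraction of draws implies p1 ≥ 1, whereas the logit link used by the binomial GLMMs maps every draw back into (0, 1).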
Contemporary statistical publications rely on simulation to evaluate performance of new methods and compare them with established methods. In the context of random-effects meta-analysis of log-odds-ratios, we investigate how choices in generating data affect such conclusions. The choices we study include the overall log-odds-ratio, the distribution of probabilities in the control arm, and the distribution of study-level sample sizes. We retain the customary normal distribution of study-level effects. To examine the impact of the components of simulations, we assess the performance of the best available inverse-variance-weighted two-stage method, a two-stage method with constant sample-size-based weights, and two generalized linear mixed models. The results show no important differences between fixed and random sample sizes. In contrast, we found differences among data-generation models in estimation of the heterogeneity variance and the overall log-odds-ratio. This sensitivity to design poses challenges for the use of simulation in choosing methods of meta-analysis.
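A data-generation scheme of the kind studied here can be sketched as follows. This is one minimal variant, assuming a constant control-arm probability and equal arm sizes; the abstract's point is precisely that such choices (fixed vs. random sample sizes, the distribution of control-arm probabilities) matter, so the constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_meta(k, theta, tau2, p_ctrl=0.2, n=100):
    """Generate one meta-analysis of k two-arm studies under the conventional
    model: normal study-level effects on the log-odds scale, binomial counts
    in each arm. p_ctrl and n are held constant here for simplicity."""
    theta_i = rng.normal(theta, np.sqrt(tau2), size=k)  # study log-odds-ratios
    logit = np.log(p_ctrl / (1 - p_ctrl)) + theta_i
    p_trt = 1 / (1 + np.exp(-logit))                    # treatment-arm probabilities
    x_ctrl = rng.binomial(n, p_ctrl, size=k)
    x_trt = rng.binomial(n, p_trt)
    return x_trt, x_ctrl, n

x_trt, x_ctrl, n = simulate_meta(k=10, theta=0.5, tau2=0.1)
```

Varying p_ctrl across studies, drawing n from a distribution, or changing k reproduces the design dimensions whose influence the abstract reports.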