Publication bias is a type of systematic error in evidence synthesis that prevents the synthesized evidence from representing the underlying truth: clinical studies with favorable results are more likely to be published, so the evidence synthesized in meta-analyses tends to be exaggerated. The trim-and-fill method is a popular tool to detect and adjust for publication bias. Simulation studies have been performed to assess this method, but they may not fully represent realistic settings of publication bias. Based on real-world meta-analyses, this article provides practical guidelines and recommendations for using the trim-and-fill method. We used a worked illustrative example to demonstrate the idea of the trim-and-fill method, and we reviewed three estimators (R0, L0, and Q0) for imputing missing studies. A resampling method was proposed to calculate P values for all three estimators. We also summarized available meta-analysis software programs that implement the trim-and-fill method. Moreover, we applied the method to 29,932 meta-analyses from the Cochrane Database of Systematic Reviews and empirically evaluated its overall performance, carefully exploring potential issues that occurred in our analyses. The estimators L0 and Q0 detected at least one missing study in more meta-analyses than R0, while Q0 often imputed more missing studies than L0. After adding the imputed missing studies, the significance of heterogeneity and of the overall effect size changed in many meta-analyses. All estimators generally converged quickly, but L0 and Q0 failed to converge in a few meta-analyses that contained studies with identical effect sizes. In addition, P values produced by different estimators could lead to different conclusions about the significance of publication bias. Outliers and the pre-specified direction of missing studies could strongly influence the trim-and-fill results. Meta-analysts are recommended to perform the trim-and-fill method with great caution when using meta-analysis software programs. Some default settings in these programs (e.g., the choice of estimator and the direction of missing studies) may not be optimal for a given meta-analysis; they should be determined on a case-by-case basis. Sensitivity analyses are encouraged to examine the effects of different estimators and of outlying studies. The trim-and-fill estimator should also be routinely reported in meta-analyses, because the results depend heavily on it.
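To make the estimators concrete, the sketch below computes a single iteration of the three rank-based estimators of the number of missing studies from a set of effect sizes and their variances. It is a minimal illustration, assuming the standard Duval-and-Tweedie formulas for R0, L0, and Q0 and assuming that missing studies are suspected on the right (larger-effect) side of the funnel plot; the data, the function name, and the side convention are hypothetical.

```python
import numpy as np

def estimate_missing_studies(effects, variances):
    """One iteration of the rank-based estimators of the number of suppressed
    studies, assuming missing studies lie on the right (larger-effect) side."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    n = len(effects)

    # Fixed-effect (common-effect) center of the funnel plot
    w = 1.0 / variances
    center = np.sum(w * effects) / np.sum(w)

    # Signed deviations from the center and ranks of their absolute values
    dev = effects - center
    ranks = np.argsort(np.argsort(np.abs(dev))) + 1  # ranks 1..n

    # T_n: sum of ranks belonging to studies right of the center
    T_n = ranks[dev > 0].sum()

    # Rank-based estimators; Q0 is undefined if every study lies on the
    # suspected side (the square root would be of a negative number)
    L0 = (4 * T_n - n * (n + 1)) / (2 * n - 1)
    Q0 = n - 0.5 - np.sqrt(2 * n**2 - 4 * T_n + 0.25)

    # R0: length of the rightmost run of the largest |deviations| that all
    # lie right of the center, minus one
    order = np.argsort(ranks)      # study indices from smallest to largest rank
    run = 0
    for idx in order[::-1]:        # walk down from the largest |deviation|
        if dev[idx] > 0:
            run += 1
        else:
            break
    R0 = run - 1

    clip = lambda x: max(0, int(round(x)))  # nearest non-negative integer
    return {"R0": clip(R0), "L0": clip(L0), "Q0": clip(Q0)}

# Hypothetical log-odds-ratio estimates and variances
effects = [0.10, 0.25, 0.32, 0.41, 0.55, 0.70, 0.95]
variances = [0.01, 0.02, 0.03, 0.05, 0.08, 0.12, 0.20]
print(estimate_missing_studies(effects, variances))  # e.g. {'R0': 2, 'L0': 3, 'Q0': 4}
```

The full trim-and-fill procedure would then trim the estimated number of most extreme studies, re-estimate the funnel-plot center, iterate until the estimate stabilizes, and finally fill in mirror images of the trimmed studies before re-running the meta-analysis.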
Publication bias, more generally termed small-study effects, is a major threat to the validity of meta-analyses. Most meta-analysts rely on P values from statistical tests to make a binary decision about the presence or absence of small-study effects. Measures are available to quantify the magnitude of small-study effects, but the current literature lacks clear rules to help evidence users judge whether such effects are minimal or substantial. This article aims to provide rules of thumb for interpreting these measures. We use six measures to evaluate small-study effects in 29,932 meta-analyses from the Cochrane Database of Systematic Reviews: Egger's regression intercept and the skewness under both fixed-effect and random-effects settings, the proportion of suppressed studies, and the relative change in the estimated overall result due to small-study effects. The cut-offs for different extents of small-study effects are determined from the quantiles of the empirical distributions of these measures. We present these empirical distributions and propose a rough guide for interpreting the measures' magnitudes. The proposed rules of thumb may help evidence users grade the certainty in evidence as it is affected by small-study effects.
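As a rough illustration of the last two measures, the sketch below computes a proportion of suppressed studies and a relative change of the overall estimate. The specific definitions used here (imputed studies divided by the total of observed plus imputed studies, and the absolute relative shift of the pooled estimate after adjustment) are assumptions made for illustration and may differ from the exact definitions in the article; the numbers are hypothetical.

```python
def small_study_effect_measures(theta_orig, theta_adj, n_observed, n_imputed):
    """Two simple magnitude measures, under the assumed definitions above."""
    # Proportion of suppressed studies among all (observed + imputed) studies
    prop_suppressed = n_imputed / (n_observed + n_imputed)
    # Relative change of the overall estimate after adjustment
    rel_change = abs(theta_adj - theta_orig) / abs(theta_orig)
    return prop_suppressed, rel_change

# Hypothetical numbers: a meta-analysis of 12 studies where trim-and-fill
# imputes 3 studies and shifts the pooled log odds ratio from 0.40 to 0.31
print(small_study_effect_measures(0.40, 0.31, 12, 3))  # (0.2, 0.225)
```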
Publication bias threatens meta-analysis validity. It is often assessed via the funnel plot; an asymmetric plot implies small-study effects, and publication bias is one cause of the asymmetry. Egger's regression test is a widely used tool to quantitatively assess such asymmetry. It examines the association between the observed effect sizes and their sample standard errors (SEs); a strong association indicates small-study effects. However, its false positive rates may be inflated when such an association intrinsically exists even in the absence of small-study effects, particularly in meta-analyses of odds ratios (ORs). Various alternatives are available to address this problem. They usually replace Egger's regression predictor or response with different measures; consequently, they are powerful only in specific cases. We propose a Bayesian approach to assessing small-study effects in meta-analyses of ORs. It controls false positive rates by using latent "true" SEs, rather than sample SEs, in the Egger-type regression to avoid the intrinsic association between ORs and their SEs. Although "true" SEs are unknown in practice, they can be modeled under the Bayesian framework. We use simulated and real data to compare various methods. When ORs are far from 1, the proposed method may have high power with controlled false positive rates, while Egger's test has seriously inflated false positive rates; nevertheless, in other situations, some other methods may be superior. In general, the proposed method may serve as an alternative to rule out potential confounding effects caused by the intrinsic association between ORs and their SEs in the assessment of small-study effects.
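For reference, the sketch below implements the classical Egger regression test described above, regressing the standardized effect sizes on their precisions and testing whether the intercept differs from zero. It is a minimal sketch of the classical test only, not the proposed Bayesian method, which would further replace the sample SEs with latent "true" SEs modeled under a Bayesian framework; the data are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def egger_test(effects, ses):
    """Classical Egger regression: regress the standardized effect
    (effect / SE) on precision (1 / SE); an intercept far from zero
    suggests funnel-plot asymmetry (small-study effects)."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    y = effects / ses                 # standardized effect sizes
    X = sm.add_constant(1.0 / ses)    # intercept + precision
    fit = sm.OLS(y, X).fit()
    intercept = fit.params[0]
    return intercept, fit.pvalues[0]  # intercept and its two-sided P value

# Hypothetical log odds ratios and their sample SEs
lor = [0.15, 0.22, 0.35, 0.48, 0.60, 0.81, 1.10]
se  = [0.08, 0.12, 0.18, 0.22, 0.30, 0.38, 0.55]
print(egger_test(lor, se))
```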