2011
DOI: 10.1136/bmj.d4002
Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials

Abstract: Funnel plots, and tests for funnel plot asymmetry, have been widely used to examine bias in the results of meta-analyses. Funnel plot asymmetry should not be equated with publication bias, because it has a number of other possible causes. This article describes how to interpret funnel plot asymmetry, recommends appropriate tests, and explains the implications for the choice of meta-analysis model. It recommends how to examine and interpret funnel plot asymmetry (also known as small study effects²) in m…
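The regression-based asymmetry tests the abstract alludes to can be sketched in a few lines. The following is a minimal, illustrative Egger-style regression (standardized effect on precision), with made-up study data; it is not the article's recommended implementation, whose specific test choices depend on the outcome type.

```python
# Hypothetical sketch of an Egger-style regression test for funnel plot
# asymmetry. All study numbers used with it are illustrative, not real data.
import statistics

def egger_intercept(effects, ses):
    """Regress the standardized effect (effect/SE) on precision (1/SE).
    An intercept far from zero suggests funnel plot asymmetry
    (small study effects); near zero suggests a symmetric funnel."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx                      # the Egger intercept
```

With a constant true effect across studies the intercept is zero; adding a bias that grows with the standard error shifts the intercept away from zero, which is the asymmetry the test detects.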

Cited by 5,526 publications (3,988 citation statements)
References 44 publications
“…For subgroup analysis, we tested for interaction using a χ² significance test.²¹ We planned to examine publication bias using funnel plots for outcomes for which data from 10 or more studies were available.²² Data were analysed with STATA software (version 14.2, TX, USA).…”
Section: Methods
confidence: 99%
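The χ² test for subgroup interaction quoted above can be sketched as follows. This is an illustrative two-subgroup version (a 1-df Q statistic comparing two independent estimates on the same scale, e.g. log risk ratios); the cited analysis may have used a different multi-group formulation, and the numbers below are invented.

```python
# Illustrative chi-squared (1 df) test for interaction between two
# subgroup estimates. Inputs are assumed to be on the log scale.
import math

def interaction_chi2(est1, se1, est2, se2):
    """Return (Q, p): Q compares the two subgroup estimates, and p is
    the chi-squared (1 df) upper-tail probability of that Q."""
    q = (est1 - est2) ** 2 / (se1 ** 2 + se2 ** 2)  # Q statistic, df = 1
    p = math.erfc(math.sqrt(q / 2.0))               # chi2(1) survival fn
    return q, p
```

Identical subgroup estimates give Q = 0 and p = 1; a between-subgroup difference that is large relative to its combined standard error gives a small p, flagging an interaction.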
“…Potential publication bias was examined for the primary end point by constructing a “funnel plot” in which the SE of the log RR was plotted against the RR. The asymmetry of the plot was estimated both visually and by a linear regression approach.⁹ The influence of each study and potential publication bias were addressed by testing whether deleting each study in turn would have changed significantly the pooled results of the meta-analysis for the primary end point.…”
Section: Methods
confidence: 99%
“…⁷⁰ We did not quantify publication biases or selective outcome reporting biases because of the questionable statistical validity of the available tests.⁷¹ We defined a high level of evidence on the basis of consistent findings from low risk-of-bias RCTs. We downgraded strength of evidence to moderate if at least one of the four strength-of-evidence criteria was not met and to low if two or more criteria were not met.…”
Section: Methods
confidence: 99%