2013
DOI: 10.1371/journal.pone.0071813

Comparison of Pooled Risk Estimates for Adverse Effects from Different Observational Study Designs: Methodological Overview

Abstract: Background: A diverse range of study designs (e.g. case-control or cohort) are used in the evaluation of adverse effects. We aimed to ascertain whether the risk estimates from meta-analyses of case-control studies differ from those of other study designs. Methods: Searches were carried out in 10 databases in addition to reference checking, contacting experts, and handsearching key journals and conference proceedings. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio…

Cited by 17 publications (20 citation statements)
References 78 publications
“…37,42 Concato et al 37 justified this exclusion based on previous evidence presented in Sacks et al,39 who reported that 79% of interventions tested were considered effective in trials with historical controls, whereas only 20% were considered effective in RCTs. Further empirical evidence of the potential for bias in studies using historical controls is also presented in Ioannidis et al, 34 Algra and Rothwell 40 and Golder et al, 41 who all found that there were fewer discrepancies between the results of RCTs and NRSs when studies with historical controls were excluded. Ioannidis et al 34 also found that results from prospective NRSs contained fewer discrepancies compared with effect estimates from randomised studies than did retrospective studies, either with current or historical controls.…”
Section: Quantification of Bias in Observational Studies
confidence: 91%
“…[30][31][32][33][34][35][36][37][38][39][40][41][42][43] A summary of the methods and findings of each of the 14 identified studies is presented in Appendix 2 (see Table 42). …”
Section: Quantification of Bias in Observational Studies
confidence: 99%
“…Golder et al [8] showed that systematic reviews of randomized and observational data give more similar answers than expected.…”
confidence: 89%