2016
DOI: 10.1016/j.jclinepi.2015.02.013
Average effect estimates remain similar as evidence evolves from single trials to high-quality bodies of evidence: a meta-epidemiologic study

Cited by 7 publications (7 citation statements); references 12 publications.
“…One ME study [29] demonstrated that the pooled estimate across all trials was significantly lower than that of the first trial (ratio of effect sizes: 2.67, 95% CI: 2.12–3.37), whereas the other ME study [30] found no such association (ratio of effect sizes: 1.03, 95% CI: 0.98–1.08). Several other trial-level characteristics, including sufficient follow-up, placebo control, and statistician involvement, have also been investigated, with no significant associations found (Additional file 10: Appendix 10).…”
Section: Results (mentioning)
Confidence: 99%
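The comparison quoted above is expressed as a ratio of effect sizes with a 95% CI, which is conventionally computed on the log scale. A minimal sketch of that calculation, using hypothetical numbers rather than data from either ME study:

```python
import math

def ratio_of_effect_sizes(es_first, se_log_first, es_pooled, se_log_pooled, z=1.96):
    """Ratio of two effect sizes with a 95% CI, computed on the log scale.

    Assumes both effect sizes are positive and that the standard errors
    are given on the log scale (as for log risk ratios or log odds ratios).
    """
    log_ratio = math.log(es_first / es_pooled)
    # Standard errors of independent log estimates combine in quadrature.
    se = math.sqrt(se_log_first ** 2 + se_log_pooled ** 2)
    lower = math.exp(log_ratio - z * se)
    upper = math.exp(log_ratio + z * se)
    return math.exp(log_ratio), (lower, upper)

# Hypothetical example: first-trial effect 0.8, pooled effect 0.3
ratio, (lo, hi) = ratio_of_effect_sizes(0.8, 0.1, 0.3, 0.05)
```

A ratio above 1 with a CI excluding 1 (as in the 2.67 result from [29]) indicates a larger effect in the first trial; a CI spanning 1 (as in the 1.03 result from [30]) indicates no detectable difference.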
See 1 more Smart Citation
“…One ME study [ 29 ] demonstrated that overall trials showed significantly much lower treatment effect estimates than that of first trial (ratio of effect size: 2.67, 95% CI: 2.12–3.37), although the remaining ME study [ 30 ] did not find such association (ratio of effect size: 1.03, 95% CI: 0.98–1.08). Several other trial-level characteristics including sufficient follow-up, placebo control and statistician involvement, among others have been investigated as well, with no significant associations being found (Additional file 10 : Appendix 10).…”
Section: Resultsmentioning
confidence: 99%
“…A tentative explanation for the differences between these subgroups is that trials of complementary medicine were more likely to suffer from methodological flaws [31]; iii) larger treatment effect estimates in the first trial relative to subsequent trials were consistently observed for continuous outcomes, regardless of the trial size, risk of bias, or effect size of the first trial, indicating the robustness of the association. Such explorations are missing for binary outcomes, however, where inconsistencies were observed between the two available ME studies [29, 30]; future ME studies should address this.…”
Section: Discussion (mentioning)
Confidence: 99%
“…When we detected differences in the effect estimates or conclusions between the preprint and the peer-reviewed article, two investigators independently classified these changes. We used the typology developed by Gartlehner et al. (30) to classify these changes but had to adapt it because of the range of effect estimates we identified. We considered the statistical significance of the primary outcome to have changed between the preprint and the peer-reviewed article when at least one of the two effect estimates had a P-value that was statistically significant in either the preprint or the publication and not statistically significant in the other.…”
Section: Data Extraction and Analysis (mentioning)
Confidence: 99%
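The decision rule described in this excerpt, where significance is considered "changed" when exactly one of the two estimates crosses the threshold, can be sketched as a simple check. The function name and the 0.05 threshold are illustrative assumptions, not taken from the adapted Gartlehner et al. typology:

```python
def significance_changed(p_preprint, p_published, alpha=0.05):
    """Flag a change in statistical significance between a preprint and
    its peer-reviewed article: True when exactly one of the two P-values
    falls below the significance threshold (alpha assumed to be 0.05)."""
    return (p_preprint < alpha) != (p_published < alpha)

# Illustrative calls
significance_changed(0.03, 0.20)  # significant -> not significant: changed
significance_changed(0.03, 0.01)  # both significant: unchanged
```

The inequality-of-booleans form makes the symmetry explicit: the rule does not care which document was significant, only that the two disagree.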
“…Reporting methods used to select harms: 10) Report the methods used to prioritize harms, differentiate serious harms from frequent but less serious harms, and indicate interventions for which serious harms are not believed to be an issue, and why.…”
Section: Unanticipated Harms (mentioning)
Confidence: 99%
“…The assessment and reporting of harms is often suboptimal [1-3]; studies are often too short to evaluate important long-term harms and have inadequate statistical power to evaluate serious but uncommon harms [5, 6]; patients enrolled in research studies are frequently at lower risk for harms than those encountered in clinical practice [7], potentially resulting in underestimation of harms; and important data on harms may be unpublished or selectively reported [9-11]. In 2005, AHRQ funded a series of white papers on challenges in evidence synthesis that included an article on the evaluation of harms [5]. It highlighted unique challenges in finding and selecting data on harms, rating the quality of harms reporting, and synthesizing and displaying data from studies reporting harms.…”
Section: Introduction (mentioning)
Confidence: 99%