2012
DOI: 10.1503/cjs.023410

Meta-analytic comparison of randomized and nonrandomized studies of breast cancer surgery

Abstract (Background): Randomized controlled trials (RCTs) are thought to provide the most accurate estimation of the "true" treatment effect. The relative quality of effect estimates derived from nonrandomized studies (nRCTs) remains unclear, particularly in surgery, where the obstacles to performing high-quality RCTs are compounded. We performed a meta-analysis of effect estimates of RCTs comparing surgical procedures for breast cancer…

Cited by 17 publications (6 citation statements) | References 46 publications
“…Shikata et al performed a similar comparison between research designs, but restricted the comparison to 18 digestive surgery topics and found a significant difference between study designs (observational vs RCT) in 25% of the primary outcomes assessed [20]. Similar findings were presented more recently by Edwards et al for breast cancer surgery; they found differences in study results between designs in 2 out of 10 topics [21]. The overall conclusion, based on theoretical rather than empirical considerations, seems to be that RCTs have superior validity (notably due to randomization), yet in certain situations the different designs may actually yield quite similar results.…”
Section: Comparison of Study Designs for Intervention Studies (supporting)
confidence: 88%
“…Results from randomized trials (9, 13–15) and observational studies (6, 9, 10, 13–21) led to the now widely accepted conclusion that ART should be initiated as soon as possible after diagnosis of HIV infection. The two most recent studies, the randomized START trial (9) and the observational HIV-CAUSAL Collaboration (10), compared the effectiveness of immediate initiation regardless of CD4 count versus deferred initiation until the CD4 count dropped below 350 cells/mm³ or acquired immunodeficiency syndrome (AIDS) was diagnosed, in HIV-positive, AIDS-free, and treatment-naïve individuals with a CD4 count >500 cells/mm³ at the start of the study.…”
Section: Case Study: Initiation of Antiretroviral Therapy in HIV-Posi… (mentioning)
confidence: 99%
“…If differences other than randomization are not explicitly taken into account, randomized-observational comparisons, as commonly undertaken in meta-analyses (4–8), may be hard to interpret because they generally compare "apples with oranges" rather than "apples with apples". Informative comparisons between randomized and observational estimates will often require a careful re-analysis of the data from both the randomized trials and the observational studies.…”
Section: Introduction (mentioning)
confidence: 99%
“…Other notable examples include the Halsted radical mastectomy, kidney decapsulation to treat hypertension, and uterine suspension. A recent comparison of randomized and nonrandomized studies in breast cancer surgery found that, depending on the metric used, the effect estimates differed more than two-fold between the two study types in 20–40% of outcomes, calling into question the validity of evidence derived from nonrandomized comparisons [19]. Similar findings were reported by Peinemann et al [20], who performed a systematic review of methodological studies examining the effect of study type on reported results.…”
Section: The Current Clinical Research Landscape in Surgery (mentioning)
confidence: 64%
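The two-fold discrepancy criterion quoted above can be made concrete with a minimal sketch. This is an assumed interpretation of the metric (ratio of relative-effect estimates on the ratio scale); the function name and all numbers are hypothetical and do not come from the cited papers:

```python
def fold_difference(rct_estimate: float, nrct_estimate: float) -> float:
    """Ratio of two relative-effect estimates, expressed as a fold change >= 1."""
    ratio = rct_estimate / nrct_estimate
    return ratio if ratio >= 1 else 1 / ratio

# Hypothetical relative risks for three outcomes: (RCT estimate, nRCT estimate).
outcomes = {
    "local recurrence": (0.80, 0.35),
    "overall survival": (0.95, 0.90),
    "wound infection": (1.10, 2.40),
}

# Flag outcomes where the two study types disagree by more than two-fold.
discordant = [name for name, (rct, nrct) in outcomes.items()
              if fold_difference(rct, nrct) > 2.0]
print(discordant)  # → ['local recurrence', 'wound infection']
```

Under this reading, "differed more than two-fold" means the larger estimate exceeds twice the smaller one in either direction, which is why both an attenuated and an inflated nonrandomized estimate are flagged.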