2017
DOI: 10.1002/cpt.857

When and How Can Real World Data Analyses Substitute for Randomized Controlled Trials?

Abstract: Regulators consider randomized controlled trials (RCTs) as the gold standard for evaluating the safety and effectiveness of medications, but their costs, duration, and limited generalizability have caused some to look for alternatives. Real world evidence based on data collected outside of RCTs, such as registries and longitudinal healthcare databases, can sometimes substitute for RCTs, but concerns about validity have limited their impact. Greater reliance on such real world data (RWD) in regulatory decision …

Cited by 233 publications (253 citation statements)
References 70 publications
“…All trials were required to be large and sufficiently well‐powered to make their results reliable. Note that we did not intend to produce a representative sample of trials used for regulatory decision making, but instead sought to create a sample of trials that would be representative of the types of trials that are potentially replicable in claims data, given current knowledge …”
Section: RCT Search Strategy
confidence: 99%
“…A similar protocol template will be used for all replications, but specific design elements and operational (coding) definitions will be chosen on the basis of knowledge of the trial, including exposure and outcome measurement, inclusion and exclusion criteria, and definition of the follow‐up period, as well as knowledge of the therapeutic area and likely sources of confounding. Although design and analysis will vary, depending on the specific study, whenever possible, we will utilize design features that have been shown previously to be more likely to lead to valid estimates, including new user, active comparator cohort designs. When possible and recommended, the study protocol will include several alternative analyses, clearly declaring one as the primary analysis that would be recommended as most likely to be valid.…”
Section: Study Design and Implementation Process
confidence: 99%
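
The excerpt above mentions new-user, active-comparator cohort designs as a feature favored for valid estimation in claims-data replications. As a purely hypothetical sketch (not the cited protocol's actual implementation), the Python/pandas code below shows one way such a cohort might be assembled from dispensing and enrollment tables; the table layout, column names, and the 365-day washout window are all illustrative assumptions.

```python
# Hypothetical sketch of a new-user, active-comparator cohort in claims data.
# Column names (patient_id, drug, dispense_date, enroll_start) and the 365-day
# washout are illustrative assumptions, not details of the cited protocol.
import pandas as pd

WASHOUT_DAYS = 365  # assumed washout window defining "new use"

def new_user_active_comparator(dispensings: pd.DataFrame,
                               enrollment: pd.DataFrame,
                               drug_a: str,
                               drug_b: str) -> pd.DataFrame:
    """One row per eligible patient: index date and exposure (drug_a vs. drug_b).

    New-user rule: the index date is the first observed dispensing of either
    study drug, and the patient has at least WASHOUT_DAYS of enrollment before
    that date, so the absence of earlier use is verifiable in the data.
    """
    d = dispensings.loc[dispensings["drug"].isin([drug_a, drug_b])].copy()
    d["dispense_date"] = pd.to_datetime(d["dispense_date"])

    # Index date = first dispensing of either study drug; exposure = that drug.
    first = (d.sort_values("dispense_date")
               .groupby("patient_id", as_index=False)
               .first()
               .rename(columns={"dispense_date": "index_date", "drug": "exposure"}))

    # Require a full washout of continuous enrollment before the index date.
    e = enrollment.copy()
    e["enroll_start"] = pd.to_datetime(e["enroll_start"])
    cohort = first.merge(e[["patient_id", "enroll_start"]], on="patient_id")
    eligible = cohort["index_date"] - cohort["enroll_start"] >= pd.Timedelta(days=WASHOUT_DAYS)
    return cohort.loc[eligible, ["patient_id", "index_date", "exposure"]]
```

A full study would go on to apply trial-specific inclusion and exclusion criteria, define outcomes and follow-up, and address likely sources of confounding, as the excerpt describes.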
“…[11][12][13] Potential reasons for discrepancies observed in the findings of these trials have been discussed elsewhere. [14][15][16][17] Two recently published observational studies report conflicting findings.…”
Section: Introduction
confidence: 99%
“…Firstly, the evidence of major RCTs on this question is quite strong, serving as a good estimate of the “true” treatment association by ACT. Secondly, resected PCa patients represent a small but homogeneous group of patients, which provides a better basis to study treatment associations as there are less influential factors that might distort a potential association [39]. Thirdly, the ESPAC-3 trial revealed that survival of PCa patients does not depend on the time of initiation of ACT after surgery, which would have introduced bias had there been a causal relationship [18,19].…”
Section: Discussion
confidence: 99%