2017
DOI: 10.1186/s12874-017-0295-7
Evaluation of biases present in the cohort multiple randomised controlled trial design: a simulation study

Abstract: Background: The cohort multiple randomised controlled trial (cmRCT) design provides an opportunity to incorporate the benefits of randomisation within clinical practice; thus reducing costs, integrating electronic healthcare records, and improving external validity. This study aims to address a key concern of the cmRCT design: refusal to treatment is only present in the intervention arm, and this may lead to bias and reduce statistical power. Methods: We used simulation studies to assess the effect of this refusal,…

Cited by 17 publications (27 citation statements)
References 21 publications
“…Before the start of the study, a required sample size of 166 patients was estimated based on an expected acceptance rate of 70% in the intervention group, a clinically relevant 10-point difference in quality of life, a power of 80%, and an alpha of 0.05 [12]. After the recruitment of 152 patients, the actual acceptance rate was lower than expected (i.e., 55% instead of 70%) and the sample size was updated, as recommended by Candlish et al [15], to 260 patients. Noticeably, this sample size modification was not based on interim analysis of the trial outcome.…”
Section: What Is the Implication and What Should Change Now?
Citation type: mentioning; confidence: 99%
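The recalculation quoted above can be sketched with a standard two-sample normal-approximation sample-size formula in which the target difference is diluted by the expected acceptance rate (non-accepters are assumed to receive care as usual, so the intention-to-treat effect shrinks to acceptance rate × target difference). The standard deviation of 16 quality-of-life points is an illustrative assumption, not a value reported in the quote, so the figures approximate rather than reproduce the trial's own recalculation to 260 patients.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, accept_rate, alpha=0.05, power=0.80):
    """Two-sample normal-approximation sample size per arm, with the
    target difference `delta` diluted by `accept_rate` to reflect
    refusal in the offer arm (a sketch, not the trial's exact method)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_beta = z.inv_cdf(power)
    diluted = accept_rate * delta        # ITT effect under refusal
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / diluted ** 2)

# Assumed SD of 16 QoL points chosen so the 70%-acceptance design
# lands near the reported total of 166 patients:
n_total_70 = 2 * n_per_arm(delta=10, sd=16, accept_rate=0.70)  # 166
n_total_55 = 2 * n_per_arm(delta=10, sd=16, accept_rate=0.55)  # 266
```

Dropping the acceptance rate from 70% to 55% inflates the required total from 166 to roughly 266 under these assumptions, of the same order as the 260 the trial adopted via its own recalculation.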
“…It took 18 months to identify and randomize these 166 patients. As recommended by Candlish et al [15], we updated the sample size after the recruitment of 152 patients because the actual acceptance rate (55%) of the intervention deviated from the expected rate (70%). Twelve additional months were needed to reach the updated sample size (n = 260).…”
Section: Recruitment
Citation type: mentioning; confidence: 99%
“…After publication of the original article [1], it came to the authors’ attention that there was an error affecting the References. The published Reference 14 [2] is incorrect, and should have cited a different article by Pate et al [3].…”
Section: Erratum
Citation type: mentioning; confidence: 99%
“…A potential problem with this approach is that a substantial number of patients offered an intervention that is undergoing testing may not accept it, since they did not enrol in the cohort with any expectation that it would be offered to them. This would dilute intervention effects estimated on an intention-to-treat basis, potentially substantially if the rate of accepted offers is low, as the intervention arm then includes a large proportion of patients receiving care as usual [23]. A possible solution that has been suggested to reduce non-acceptance of intervention offers is to present cohort patients with a list of possible interventions as part of regular cohort data collection and ask if they would agree to use them if offered [4].…”
Section: Introduction
Citation type: mentioning; confidence: 99%
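The dilution described in the quote above can be demonstrated with a small simulation in the spirit of the paper's simulation-study approach. All names and parameter values here are illustrative assumptions, not figures taken from the source:

```python
import random

random.seed(1)

def simulate_itt_effect(n_per_arm=10_000, true_effect=10.0, sd=16.0,
                        accept=0.55):
    """Sketch of cmRCT-style ITT dilution: every patient randomised to
    the offer arm is analysed in that arm, but only a fraction `accept`
    takes up the intervention; refusers respond like usual-care patients."""
    usual_care = [random.gauss(0.0, sd) for _ in range(n_per_arm)]
    offered = [random.gauss(true_effect if random.random() < accept else 0.0,
                            sd)
               for _ in range(n_per_arm)]
    return sum(offered) / n_per_arm - sum(usual_care) / n_per_arm

itt_estimate = simulate_itt_effect()
# The ITT estimate gravitates toward accept * true_effect (5.5 points
# here), well short of the full 10-point effect, which is the loss of
# power the cmRCT literature warns about.
```

Under these assumptions, halving the acceptance rate roughly halves the observable ITT effect, which is why a lower-than-expected acceptance rate forces the sample-size increases discussed in the citing papers.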