Between 1993 and 1996, a total of 452 patients were entered into a randomized trial evaluating eliprodil (a non-competitive NMDA receptor antagonist) in patients suffering from severe head injury. The primary efficacy analysis concerned the Glasgow Outcome Score (GOS) six months after randomization. This outcome was classified into three ordered categories: good recovery; moderate disability; and a worst category combining severe disability, vegetative state and death. A sample size calculation was performed before the start of the study, using a formula that depends on the anticipated proportions of patients in the three outcome categories, on the proportional odds assumption and on the relationship between outcome and prognostic factors such as the Glasgow Coma Score at entry. Owing to uncertainty about the influence of prognostic factors and about the proportions of patients in the three GOS categories, a blinded sample size review was planned. This review was performed on the basis of the first 93 patients to respond, and it led to an increase in the sample size from 400 to 450. In this paper the pre-trial simulations showing that the type I error rate would be unaffected and the power preserved are presented, and the implementation of the procedure is described.
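As an illustration of the kind of blinded review described above, the following sketch recomputes Whitehead's sample size formula for an ordered categorical outcome under the proportional odds assumption, first from planning values and then from blinded (pooled) interim category counts. The planning proportions, the target odds ratio and the interim counts are hypothetical, not the values used in the eliprodil trial.

```python
import numpy as np
from scipy.stats import norm

def whitehead_ordinal_n(pbar, theta, alpha=0.05, power=0.90):
    """Total sample size for a two-group comparison of an ordinal
    outcome under proportional odds (Whitehead's formula):
    N = 6 (z_{1-a/2} + z_b)^2 / (theta^2 (1 - sum(pbar^3))).

    pbar  : anticipated average category proportions (pooled over arms)
    theta : log odds ratio to be detected
    """
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    pbar = np.asarray(pbar, dtype=float)
    return 6 * (z_a + z_b) ** 2 / (theta ** 2 * (1 - np.sum(pbar ** 3)))

# Hypothetical planning values for the three GOS categories:
# good recovery / moderate disability / worst category
n_planned = whitehead_ordinal_n([0.30, 0.25, 0.45], theta=np.log(1.8))

# Blinded review: re-estimate the pooled proportions from the first
# patients to respond (hypothetical counts summing to 93) and recompute
# the sample size with the same target odds ratio.
interim = np.array([35, 20, 38])
n_revised = whitehead_ordinal_n(interim / interim.sum(), theta=np.log(1.8))
print(round(n_planned), round(n_revised))
```

Because the interim counts are pooled over treatment arms, the review uses no information about the treatment difference, which is why such a procedure can be expected to leave the type I error rate unaffected.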
In a sequential clinical trial, accrual of data on patients often continues after the stopping criterion for the study has been met. This is termed "overrunning." Overrunning occurs mainly when the primary response from each patient is measured after some extended observation period. The objective of this article is to compare two methods of allowing for overrunning. In particular, simulation studies are reported that assess the two procedures in terms of how well they maintain the intended type I error rate, and the effect on power of incorporating the overrunning data under each procedure is evaluated.
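To make the phenomenon concrete, the following sketch (an illustration of why formal methods are needed, not either of the two procedures compared in the article) simulates, under the null hypothesis, interim boundary crossings that are followed by overrunning data, and estimates how often a naive re-analysis that simply appends the overrunning observations and reuses the original critical value reverses the stopping decision. The look size, overrun size and boundary are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def overrun_reversal(n1=100, n_over=30, c=1.96, reps=50_000):
    """Under H0 (zero treatment difference, unit-variance responses),
    simulate trials that cross the two-sided boundary |Z| >= c at an
    interim look of n1 observations and then receive n_over overrunning
    observations. Return the fraction of crossings that a naive
    re-analysis (recompute Z on all data, reuse c) no longer rejects."""
    crossed = flipped = 0
    for _ in range(reps):
        # Sum of n1 iid standard normals is N(0, n1)
        s1 = rng.normal(0.0, np.sqrt(n1))
        if abs(s1 / np.sqrt(n1)) < c:
            continue                      # boundary not crossed
        crossed += 1
        s2 = s1 + rng.normal(0.0, np.sqrt(n_over))
        if abs(s2 / np.sqrt(n1 + n_over)) < c:
            flipped += 1
    return flipped / crossed

# A substantial fraction of boundary crossings reverse once the
# overrunning data are appended, so an ad hoc update is not tenable.
print(overrun_reversal())
```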
Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results of interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method, which provides considerable flexibility to change the design of a clinical trial at an interim point. A criticism, however, is that the way in which evidence from different parts of the trial is combined means that the final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods, and then focus on a comparison of the power of the different trials when they are designed to be as similar as possible. We conclude that all methods control the type I error rate and power acceptably when the sample size is modified on the basis of a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold; in the latter case, the group-sequential approach is to be preferred. Provided that the asymptotic assumptions hold, the adaptive design approaches control the type I error rate even when the sample size is adjusted on the basis of an estimate of the treatment effect, showing that adaptive designs permit more kinds of modification than the group-sequential method.
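A minimal sketch of one adaptive design of the kind discussed, assuming a two-stage inverse-normal combination rule with equal pre-planned weights and a stage-2 sample size recomputed from the stage-1 variance estimate. All numerical settings (stage-1 size, one-sided alpha of 0.025, target difference, 90% power) are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def stage_z(a, b):
    """Two-sample z statistic with a pooled variance estimate."""
    s2 = (a.var(ddof=1) + b.var(ddof=1)) / 2
    return (a.mean() - b.mean()) / np.sqrt(2 * s2 / len(a)), s2

def adaptive_trial(n1=50, alpha=0.025, delta=0.0, sigma=1.0,
                   target=0.5, power=0.90):
    """One two-stage adaptive trial: the stage-2 per-arm size is chosen
    from the stage-1 variance estimate, and the stage-wise statistics
    are combined by the inverse-normal method with equal fixed weights.
    Returns True if H0 is rejected."""
    z1, s2 = stage_z(rng.normal(delta, sigma, n1),
                     rng.normal(0.0, sigma, n1))
    # Re-estimated total per-arm size for the target difference
    n_tot = 2 * s2 * (norm.ppf(1 - alpha) + norm.ppf(power)) ** 2 / target ** 2
    n2 = max(int(np.ceil(n_tot)) - n1, 10)
    z2, _ = stage_z(rng.normal(delta, sigma, n2),
                    rng.normal(0.0, sigma, n2))
    # Fixed, pre-specified weights are what protect the type I error
    # rate despite the data-dependent choice of n2.
    z = (z1 + z2) / np.sqrt(2)
    return z >= norm.ppf(1 - alpha)

rej = np.mean([adaptive_trial() for _ in range(20_000)])
print(rej)   # close to 0.025 under H0 (delta = 0)
```

The combined statistic here is not a function of the pooled data alone, which is exactly the sufficiency criticism raised above: the same total data set can yield different conclusions depending on how it was split across stages.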
When sequential clinical trials are conducted by plotting a statistic measuring treatment difference against another measuring information, power is guaranteed regardless of nuisance parameters. However, values need to be assigned to nuisance parameters in order to gain an impression of the sample size distribution. Each interim analysis provides an opportunity to re-evaluate the relationship between sample size and information. In this paper we discuss such mid-trial design reviews. They are particularly important in the special cases of trials with a relatively short recruitment phase followed by a longer period of follow-up, and of normally distributed responses. Examples are given of the various situations considered, and extensive simulations are reported demonstrating the validity of the review procedure in the case of normally distributed responses.
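For normally distributed responses, the relationship between sample size and information can be sketched as follows, assuming a two-arm comparison of means in which the Fisher information for the treatment difference is I = n / (2 * sigma^2) for per-arm size n. The planning and interim variance values below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def required_per_arm_n(sigma2, delta, alpha=0.05, power=0.90):
    """Per-arm sample size reaching the target information
    I_max = (z_{1-alpha/2} + z_{power})^2 / delta^2 for a two-arm
    comparison of normal means, using I = n / (2 * sigma^2)."""
    i_max = (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 / delta ** 2
    return int(np.ceil(2 * sigma2 * i_max))

# Design stage: planning value for the response variance
n_design = required_per_arm_n(sigma2=1.00, delta=0.4)

# Mid-trial design review: replace the assumed variance by an estimate
# from the data accrued so far (hypothetical value) and recompute n.
# Because I_max is unchanged, power at delta is preserved.
n_review = required_per_arm_n(sigma2=1.44, delta=0.4)
print(n_design, n_review)
```

The review changes only the projected sample size needed to reach the fixed information target, which is why the power guarantee of the information-based design is unaffected.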