The proposed guidelines for assessing the effect of new pharmaceutical agents on the QT interval (beginning of the QRS complex to the end of the T wave on the electrocardiogram) are based on the maximum of a series over time of simple one-sided 95 per cent upper confidence bounds. As a means of obtaining a 95 per cent bound for the maximum of the population parameters, this procedure is typically very conservative. This paper proposes new bounds for the maximum, both analytical and bootstrap-based, that are lower but still achieve correct coverage in crossover and parallel designs for the most realistic portions of the parameter space.
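The conventional "maximum of upper bounds" procedure described above can be sketched in a few lines. The per-timepoint estimates and standard errors below are hypothetical, and the sketch uses the normal critical value 1.645 in place of a design-specific t quantile:

```python
# Sketch of the conventional QT assessment: at each timepoint form a
# one-sided 95% upper confidence bound for the mean baseline-corrected
# QTc difference, then take the maximum over timepoints.
def max_upper_bound(means, ses, z=1.645):
    """Maximum over timepoints of one-sided 95% upper confidence bounds."""
    return max(m + z * s for m, s in zip(means, ses))

# Hypothetical per-timepoint mean differences (ms) and standard errors:
means = [2.1, 3.4, 1.8, 2.9]
ses = [1.2, 1.1, 1.3, 1.2]
print(round(max_upper_bound(means, ses), 2))
```

Because each of the pointwise bounds already has 95 per cent coverage for its own timepoint, their maximum over-covers the maximum of the parameters, which is the conservatism the paper addresses.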
A procedure for constructing two-sided beta-content, gamma-confidence tolerance intervals is proposed for general random effects models, in both balanced and unbalanced data scenarios. The proposed intervals are based on the concept of effective sample size and modified large sample methods for constructing confidence bounds on functions of variance components. The performance of the proposed intervals is evaluated via simulation techniques. The results indicate that the proposed intervals generally maintain the nominal confidence and content levels. Application of the proposed procedure is illustrated with a one-fold nested design used to evaluate the performance of a quantitative bioanalytical method.
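The general random-effects construction in the abstract reduces, in the simplest i.i.d. normal case, to the familiar interval x̄ ± k·s. The sketch below computes an approximate beta-content, gamma-confidence factor k using Howe's approximation, with the chi-square quantile obtained from the Wilson-Hilferty approximation so that only the standard library is needed; it is an illustration of the tolerance-interval concept, not the paper's effective-sample-size method:

```python
# Approximate two-sided beta-content, gamma-confidence tolerance factor k
# for an i.i.d. normal sample of size n (interval: xbar +/- k * s).
import math
from statistics import NormalDist

def howe_k(n, beta=0.90, gamma=0.95):
    nu = n - 1
    z_beta = NormalDist().inv_cdf((1 + beta) / 2)
    z_g = NormalDist().inv_cdf(1 - gamma)
    # Wilson-Hilferty approximation to the lower (1 - gamma) chi-square quantile
    chi2 = nu * (1 - 2 / (9 * nu) + z_g * math.sqrt(2 / (9 * nu))) ** 3
    # Howe's approximate factor
    return z_beta * math.sqrt(nu * (1 + 1 / n) / chi2)

print(round(howe_k(30), 2))  # factor for n = 30, beta = 0.90, gamma = 0.95
```

For unbalanced random-effects data, the abstract's approach replaces n with an effective sample size and uses modified large sample bounds on the variance components, but the resulting interval has this same x̄ ± k·s form.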
Various approaches are compared for the design and analysis of studies to assess the transfer of an analytical method from a research and development site to one or more other sites: comparison of observed bias and precision to acceptance limits, statistical quality control-type analysis, statistical difference tests, and statistical equivalence tests. These approaches are evaluated in terms of the extent to which the risks of incorrect decisions (the consumer risk of failing to detect that a site is unacceptable, and the producer risk of rejecting an acceptable site) are known and/or controlled. Comparison of observed accuracy and precision to acceptance limits is a flawed approach because both the consumer and producer risks are unknown and uncontrolled. For technology transfer, where the objective is to demonstrate sufficient acceptability or similarity, the statistical quality control and difference tests are well known to suffer from illogical characteristics (decreasing true acceptance probabilities as the sample size increases). The equivalence test is the preferred approach because it alone controls the more important consumer risk and performs in a scientifically logical manner. Acceptance limits for accuracy and precision in the equivalence test should be based on the needs of the intended use (i.e., ensuring that good batches will pass, and bad batches will fail, during future release testing and stability testing), and a rigorous method for selecting well-conceived limits is presented. Methods for sample size determination are also included. The proposed approach is illustrated with two examples.
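The equivalence-test logic for accuracy can be sketched as a two one-sided tests (TOST) check: the transfer passes if the 90 per cent two-sided confidence interval for the mean bias lies entirely within the acceptance limits. The data, limits, and tabled t critical value below are all hypothetical:

```python
# Sketch: TOST equivalence assessment of accuracy in a method transfer.
# Pass if the 90% CI for mean bias is inside (lower, upper), which is
# equivalent to two one-sided tests at the 5% level.
import math
import statistics

def tost_pass(biases, lower, upper, t_crit):
    n = len(biases)
    mean = statistics.fmean(biases)
    se = statistics.stdev(biases) / math.sqrt(n)
    lo, hi = mean - t_crit * se, mean + t_crit * se  # 90% two-sided CI
    return lower < lo and hi < upper

# Hypothetical per-run bias values (% of nominal) at the receiving site;
# 1.895 is the tabled 95th-percentile t critical value for 7 df.
biases = [1.2, -0.4, 0.8, 0.3, 1.5, -0.2, 0.9, 0.6]
print(tost_pass(biases, -3.0, 3.0, 1.895))
```

Note the logical behavior the abstract highlights: larger samples shrink the confidence interval, so a truly acceptable site becomes more likely to pass, the opposite of what happens with a difference test.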
Using historical studies, we compared the impact of using the average baseline versus the time-matched baseline on diurnal effect correction, treatment effect estimation, and analysis of variance/covariance (ANOVA/ANCOVA) efficiency in a parallel thorough QT/QTc (TQT) study. Under a multivariate normal distribution assumption, we derived conditions for achieving unbiasedness and better efficiency when using the average baseline, and confirmed these conditions using historical TQT studies. Furthermore, simulations were conducted under randomized-trial settings with and without observed baseline imbalance. We conclude that analyses using the average baseline yield better efficiency and unbiased or less biased results under our TQT study conditions.
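The efficiency gain from averaging has a simple intuition: averaging several baseline readings shrinks the measurement-error component of the baseline, so change-from-baseline scores are less noisy. The toy simulation below (hypothetical variance components, not the paper's historical data) illustrates this:

```python
# Toy simulation: variance of change-from-baseline using a single
# time-matched baseline reading vs. the average of three readings.
import random
import statistics

random.seed(7)
sd_subj, sd_meas, n = 10.0, 8.0, 2000   # hypothetical variance components
d_avg, d_single = [], []
for _ in range(n):
    true = random.gauss(0, sd_subj)                  # subject's underlying QTc
    base = [true + random.gauss(0, sd_meas) for _ in range(3)]
    post = true + random.gauss(0, sd_meas)           # on-treatment reading
    d_single.append(post - base[0])                  # time-matched baseline
    d_avg.append(post - statistics.fmean(base))      # average baseline
print(statistics.variance(d_avg) < statistics.variance(d_single))
```

With these parameters the theoretical variances are sigma^2 + sigma^2/3 for the average baseline versus 2*sigma^2 for a single reading (sigma = measurement SD), so the average-baseline change scores are clearly less variable.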
Current ad-hoc approaches to method validation are inconsistent with ensuring method suitability. A total error approach based on the use of two-sided beta-content tolerance intervals was developed. The total error approach offers a formal statistical framework for assessing analytical method performance. The approach is consistent with the concept of method suitability and controls the risk of incorrectly accepting unsuitable analytical methods.
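The total error decision rule reduces to a single containment check: accept the method when the two-sided beta-content tolerance interval for individual measurement errors lies entirely within the acceptance limits. All numbers below are hypothetical placeholders; the tolerance factor k would come from the design-specific construction:

```python
# Sketch: total-error acceptance rule for analytical method validation.
# Accept the method when the beta-content tolerance interval for individual
# measurement errors, bias +/- k * s, lies within the limits +/- lam.
def method_acceptable(bias, s, k, lam):
    return -lam < bias - k * s and bias + k * s < lam

# Hypothetical validation results (% of nominal): bias = 2.0,
# intermediate-precision SD s = 4.0, tolerance factor k = 2.4
# (depends on design, beta, gamma), acceptance limit lam = 15%.
print(method_acceptable(bias=2.0, s=4.0, k=2.4, lam=15.0))
```

Because the rule combines bias and precision into one interval of individual results, it directly targets method suitability (the proportion of future measurements falling within the limits) rather than testing bias and precision separately.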