Abstract. Informative diagnostic tools are vital to the development of useful mixed-effects models. The Visual Predictive Check (VPC) is a popular tool for evaluating the performance of population PK and PKPD models. Ideally, a VPC will diagnose both the fixed and random effects in a mixed-effects model. In many cases, this can be done by comparing different percentiles of the observed data to percentiles of simulated data, generally grouped together within bins of an independent variable. However, the diagnostic value of a VPC can be hampered by binning across a large variability in dose and/or influential covariates. VPCs can also be misleading if applied to data following adaptive designs such as dose adjustments. The prediction-corrected VPC (pcVPC) offers a solution to these problems while retaining the visual interpretation of the traditional VPC. In a pcVPC, the variability introduced by binning across independent variables is removed by normalizing the observed and simulated dependent variable based on the typical population prediction for the median independent variable in the bin. The principal benefits of the pcVPC were explored by application to both simulated and real examples of PK and PKPD models. The investigated examples demonstrate that pcVPCs have an enhanced ability to diagnose model misspecification, especially with respect to random-effects models, in a range of situations. In contrast to traditional VPCs, the pcVPC was also shown to be readily applicable to data from studies with a priori and/or a posteriori dose adaptations.
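The normalization described in the abstract can be sketched in a few lines. The function below is a simplified illustration, not the full published method: bin assignment is taken as given, and refinements such as the lower-bound adjustment for bounded data and variability correction are omitted. The typical prediction at the bin's median independent variable is approximated here by the median of the population predictions within the bin.

```python
import numpy as np

def prediction_correct(dv, pred, bins):
    """Prediction-correct dependent-variable values within bins (pcVPC sketch).

    dv   -- observed or simulated dependent-variable values
    pred -- typical population prediction (PRED) for each observation
    bins -- bin label for each observation (precomputed)
    """
    dv, pred = np.asarray(dv, float), np.asarray(pred, float)
    bins = np.asarray(bins)
    pc = np.empty_like(dv)
    for b in np.unique(bins):
        mask = bins == b
        # reference prediction: approximates PRED at the bin's median
        # independent variable by the median PRED within the bin
        pred_ref = np.median(pred[mask])
        pc[mask] = dv[mask] * pred_ref / pred[mask]
    return pc
```

After this normalization, observed and simulated percentiles within each bin can be compared exactly as in a traditional VPC, but without the spread induced by dose or covariate differences within the bin.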
Abstract. The purpose of this study is to investigate the impact of observations below the limit of quantification (BQL) occurring in three distinctly different ways and to assess the best method for preventing bias in parameter estimates and for illustrating model fit using visual predictive checks (VPCs). Three typical ways in which BQL observations can occur were investigated with simulations from three different models and different levels of the limit of quantification (LOQ). Model A was used to represent a case with BQL observations in the absorption phase of a PK model, whereas model B represented a case with BQL observations in the elimination phase. The third model, C, an indirect response model, illustrated a case where the variable of interest may decrease below the LOQ before returning towards baseline. Different approaches for handling BQL data were compared with estimation on the full dataset for 100 simulated datasets following models A, B, and C. An improved standard for VPCs was suggested to better evaluate simulation properties for data both above and below the LOQ. Omission of BQL data was associated with substantial bias in parameter estimates for all tested models, even for seemingly small amounts of censored data. The best performance was seen when the likelihood of being below the LOQ was incorporated into the model; in the tested examples this method generated overall unbiased parameter estimates. Substitution of BQL observations with LOQ/2 was in some cases shown to introduce bias and was always suboptimal compared with the best method. The new standard VPC was found to identify model misfit more clearly than VPCs of data above the LOQ only.
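Incorporating the likelihood of being below the LOQ, as the best-performing approach above does (commonly known as the M3 method), can be sketched as a censored-likelihood objective. The function below is an illustrative sketch, not the authors' implementation: the model function, data, and residual-error structure (additive, homoscedastic) are hypothetical placeholders, and random effects are omitted.

```python
import math

def m3_neg_log_lik(theta, times, dv, loq, sigma, model):
    """Sketch of an M3-type objective function.

    Observations at or above the LOQ contribute a normal log-density;
    BQL records contribute log P(Y < LOQ) = log Phi((LOQ - pred) / sigma).
    `model(theta, t)` is a hypothetical structural-model prediction.
    """
    nll = 0.0
    for t, y in zip(times, dv):
        pred = model(theta, t)
        if y >= loq:
            # quantified observation: Gaussian log-density
            nll -= (-0.5 * math.log(2.0 * math.pi * sigma ** 2)
                    - (y - pred) ** 2 / (2.0 * sigma ** 2))
        else:
            # censored observation: probability of falling below the LOQ,
            # via the standard normal CDF expressed with math.erf
            z = (loq - pred) / sigma
            nll -= math.log(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
    return nll
```

Minimizing this objective uses the information that a BQL record lies somewhere below the LOQ, instead of discarding the record or pinning it at an arbitrary value such as LOQ/2.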
Taking parameter uncertainty into account is key to making drug development decisions such as testing whether trial endpoints meet defined criteria. Currently used methods for assessing parameter uncertainty in NLMEM have limitations, and there is a lack of diagnostics for when these limitations occur. In this work, a method based on sampling importance resampling (SIR) is proposed, which has the advantage of being free of distributional assumptions and does not require repeated parameter estimation. To perform SIR, a high number of parameter vectors are simulated from a given proposal uncertainty distribution. Their likelihood given the true uncertainty is then approximated by the ratio between the likelihood of the data given each vector and the likelihood of each vector given the proposal distribution, called the importance ratio. Non-parametric uncertainty distributions are obtained by resampling parameter vectors according to probabilities proportional to their importance ratios. Two simulation examples and three real data examples were used to define how SIR should be performed with NLMEM and to investigate the performance of the method. The simulation examples showed that SIR was able to recover the true parameter uncertainty. The real data examples showed that parameter 95% confidence intervals (CI) obtained with SIR, the covariance matrix, bootstrap and log-likelihood profiling were generally in agreement when 95% CI were symmetric. For parameters showing asymmetric 95% CI, SIR 95% CI provided a close agreement with log-likelihood profiling but often differed from bootstrap 95% CI, which had been shown to be suboptimal for the chosen examples. This work also provides guidance towards the SIR workflow, i.e., which proposal distribution to choose and how many parameter vectors to sample when performing SIR, using diagnostics developed for this purpose.
SIR is a promising approach for assessing parameter uncertainty, as it is applicable in many situations where other methods fail, such as in the presence of small datasets, highly nonlinear models, or meta-analysis.

Electronic supplementary material: The online version of this article (doi:10.1007/s10928-016-9487-8) contains supplementary material, which is available to authorized users.
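The sample/weight/resample scheme described in the abstract can be sketched for a one-parameter model as follows. This is an illustrative toy, not the published NLMEM procedure: function names and settings are assumptions, and a real application (e.g. the PsN implementation referenced in the next abstract) operates on full parameter vectors and model objective-function values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir(log_lik, proposal_rvs, proposal_logpdf, m=5000, n=100):
    """Minimal sketch of sampling importance resampling (SIR).

    log_lik(theta)         -- log-likelihood of the data given parameters
    proposal_rvs(m)        -- draw m parameter vectors from the proposal
    proposal_logpdf(theta) -- log-density of the proposal at theta
    """
    thetas = proposal_rvs(m)                          # 1. sample from proposal
    log_ir = np.array([log_lik(t) - proposal_logpdf(t) for t in thetas])
    w = np.exp(log_ir - log_ir.max())                 # 2. importance ratios
    w /= w.sum()
    idx = rng.choice(m, size=n, replace=False, p=w)   # 3. resample
    return thetas[idx]
```

The returned resample is a non-parametric draw from the approximated uncertainty distribution; percentile-based 95% CIs can be read directly from it without assuming symmetry.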
Quantifying the uncertainty around endpoints used for decision-making in drug development is essential. In nonlinear mixed-effects models (NLMEM) analysis, this uncertainty is derived from the uncertainty around model parameters. Different methods to assess parameter uncertainty exist, but scrutiny towards their adequacy is low. In a previous publication, sampling importance resampling (SIR) was proposed as a fast and assumption-light method for the estimation of parameter uncertainty. A non-iterative implementation of SIR proved adequate for a set of simple NLMEM, but the choice of SIR settings remained an issue. This issue was alleviated in the present work through the development of an automated, iterative SIR procedure. The new procedure was tested on 25 real data examples covering a wide range of pharmacokinetic and pharmacodynamic NLMEM featuring continuous and categorical endpoints, with up to 39 estimated parameters and varying data richness. SIR led to appropriate results after 3 iterations on average. SIR was also compared with the covariance matrix, bootstrap and stochastic simulations and estimations (SSE). SIR was about 10 times faster than the bootstrap. SIR led to relative standard errors similar to the covariance matrix and SSE. SIR parameter 95% confidence intervals also displayed similar asymmetry to SSE. In conclusion, the automated SIR procedure was successfully applied over a large variety of cases, and its user-friendly implementation in the PsN program enables an efficient estimation of parameter uncertainty in NLMEM.

Electronic supplementary material: The online version of this article (doi:10.1007/s10928-017-9542-0) contains supplementary material, which is available to authorized users.