Background
Since pregnant women are severely underrepresented in clinical research, many take the position that pregnant women should not be excluded from research unless there are compelling “scientific reasons” for their exclusion. However, it is questionable whether this approach renders research with pregnant women fair. This paper analyzes and evaluates when research with pregnant women can be considered fair and what constitutes scientific reasons for exclusion.
Methods
Conceptual ethical and methodological analysis and evaluation of fair inclusion.
Results
Fair inclusion of pregnant women means (1) that pregnant women who are eligible are not excluded solely for being pregnant and (2) that the research interests of pregnant women are prioritized, meaning that they ought to receive substantially more attention. Fairness does not imply that pregnant women should be included in virtually every research project, as including only a few pregnant women in a study population consisting only of women will not help to determine the effectiveness and safety of a treatment in pregnant women. Separate trials in pregnant women may be preferable when we assume, or know, that the effects of interventions in pregnant women differ from the effects in other subpopulations. When we assume, or know, that there are no differences, it may be preferable to conduct post-marketing studies or establish registries. If there is no conclusive evidence indicating either differences or equivalence of effects between pregnant and non-pregnant women, yet it seems unlikely that major differences or exact equivalence exist, pregnant women should be included in sufficient numbers. Depending on the research question, this boils down to representativeness in terms of the proportion of pregnant and non-pregnant women, or to oversampling pregnant women.
Conclusions
Fair inclusion of pregnant women in research implies that separate trials in pregnant women should be promoted. Inclusion of pregnant women has to be realized at the earliest phases of the research process. In addition to researchers and research ethics committees, scientific advisory councils, funders, drug regulatory agencies, pharmaceutical companies, journal editors and others have a joint responsibility to further develop the evidence base for drug use in pregnant women.
Consequently, in the majority of publications in high-impact journals, it is difficult for readers to judge how robust the presented results are to covariate measurement error. Our systematic review highlights the need for increased awareness of the possible impact of covariate measurement error. In addition, guidance on the use of measurement error correction methods is needed.
Background
Prognostic models often show poor performance when applied to independent validation data sets. We illustrate how treatment use in a validation set can affect measures of model performance and present the uses and limitations of available analytical methods to account for this using simulated data.
Methods
We outline how the use of risk-lowering treatments in a validation set can lead to an apparent overestimation of risk by a prognostic model that was developed in a treatment-naïve cohort to make predictions of risk without treatment. Potential methods to correct for the effects of treatment use when testing or validating a prognostic model are discussed from a theoretical perspective. Subsequently, we assess, in simulated data sets, the impact of excluding treated individuals and of inverse probability weighting (IPW) on the estimated model discrimination (c-index) and calibration (observed:expected ratio and calibration plots) in scenarios with different patterns and effects of treatment use.
Results
Ignoring the use of effective treatments in a validation data set leads to poorer model discrimination and calibration than would be observed in the untreated target population for the model. Excluding treated individuals provided correct estimates of model performance only when treatment was randomly allocated, although this reduced the precision of the estimates. IPW followed by exclusion of the treated individuals provided correct estimates of model performance in data sets where treatment use was either random or moderately associated with an individual's risk, provided the assumptions of IPW were met, but yielded incorrect estimates in the presence of non-positivity or an unobserved confounder.
Conclusions
When validating a prognostic model developed to make predictions of risk without treatment, treatment use in the validation set can bias estimates of the performance of the model in future targeted individuals, and should not be ignored. When treatment use is random, treated individuals can be excluded from the analysis. When treatment use is non-random, IPW followed by the exclusion of treated individuals is recommended; however, this method is sensitive to violations of its assumptions.
Electronic supplementary material
The online version of this article (doi:10.1186/s12874-017-0375-8) contains supplementary material, which is available to authorized users.
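To make the recommended approach for non-random treatment use concrete, the minimal sketch below (Python; the simulated data, variable names, and model choices are illustrative assumptions, not the authors' code) fits a propensity model for treatment, weights the untreated individuals by the inverse probability of remaining untreated, and then computes a weighted c-index and observed:expected ratio for a model that predicts risk without treatment.

```python
# Minimal sketch (assumed example, not the authors' analysis): validating a
# "risk without treatment" model in a validation set with non-random treatment
# use, using IPW followed by exclusion of treated individuals.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# --- simulated validation set (illustrative) ---------------------------------
n = 5000
x = rng.normal(size=n)                          # predictor used by the prognostic model
model_risk = 1 / (1 + np.exp(-(-2 + 1.2 * x)))  # predictions of risk without treatment

p_treat = 1 / (1 + np.exp(-(-1 + 1.0 * x)))     # treatment more likely in higher-risk people
treated = rng.binomial(1, p_treat)

risk = model_risk * np.where(treated == 1, 0.6, 1.0)   # treatment lowers risk
outcome = rng.binomial(1, risk)

# --- IPW: weight untreated individuals by 1 / P(untreated | covariates) ------
ps_model = LogisticRegression().fit(x.reshape(-1, 1), treated)
p_untreated = ps_model.predict_proba(x.reshape(-1, 1))[:, 0]

untx = treated == 0
w = 1.0 / p_untreated[untx]                     # re-weights toward the untreated target population

# --- weighted performance among the untreated --------------------------------
c_index = roc_auc_score(outcome[untx], model_risk[untx], sample_weight=w)
oe_ratio = np.average(outcome[untx], weights=w) / np.average(model_risk[untx], weights=w)
print(f"weighted c-index: {c_index:.3f}, weighted O:E ratio: {oe_ratio:.3f}")
```

Because IPW only re-weights the observed untreated individuals, this sketch inherits the caveats noted in the abstract: it breaks down under non-positivity (some individuals are essentially always treated) or when an unobserved confounder drives both treatment use and outcome risk.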
Electronic health records provide a potentially valuable source of data for developing clinical prediction models. However, missing data are common in routinely collected health data, and often the missingness is informative. Informative missingness can be incorporated in a clinical prediction model, for example by adding a separate "missing" category to a predictor variable that has missing values. The predictive performance of such a model depends on the transportability of the missing data mechanism, which may be compromised once the model is deployed in practice and the predictive value of certain variables becomes known. Using synthetic data, this phenomenon is explained and illustrated.
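As a rough illustration of the missing-category approach, the sketch below (Python; the synthetic data, column names, and deployment scenario are assumptions for illustration, not taken from the article) encodes missing values of a lab result as their own category so a prediction model can exploit the informative missingness; the closing comment notes why that component may stop transporting once test-ordering behaviour changes after deployment.

```python
# Minimal sketch (assumed example): encoding informative missingness of a
# predictor as a separate category before fitting a prediction model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Suppose a lab test is ordered mainly for patients the clinician suspects are
# ill, so the fact that the value is missing itself carries predictive information.
suspected_ill = rng.binomial(1, 0.3, size=n)
test_ordered = rng.random(n) < 0.2 + 0.6 * suspected_ill
lab_value = np.where(test_ordered, rng.normal(loc=1.0 * suspected_ill, size=n), np.nan)
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 2 * suspected_ill))))

df = pd.DataFrame({"lab_value": lab_value, "outcome": outcome})

# Discretise the measured values and add "missing" as its own category, so the
# model can use the informative missing-data mechanism as a predictor.
df["lab_cat"] = pd.cut(df["lab_value"], bins=[-np.inf, 0, np.inf], labels=["low", "high"])
df["lab_cat"] = df["lab_cat"].cat.add_categories("missing").fillna("missing")

X = pd.get_dummies(df["lab_cat"], drop_first=True)
model = LogisticRegression().fit(X, df["outcome"])

# If test-ordering behaviour changes after deployment (e.g. the model's output
# prompts clinicians to order the test more often), the meaning of "missing"
# changes and the coefficient for the missing category no longer transports.
print(dict(zip(X.columns, model.coef_[0].round(2))))
```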