Measures of interaction on an additive scale (relative excess risk due to interaction [RERI], attributable proportion [AP], and synergy index [S]) were developed for risk factors rather than preventive factors. It has been suggested that preventive factors should be recoded to risk factors before calculating these measures. We aimed to show that these measures are problematic when applied to preventive factors without recoding, and to clarify which recoding method circumvents these problems. Preventive factors should be recoded such that the stratum with the lowest risk becomes the reference category when both factors are considered jointly (rather than one at a time). We used data from a case-control study on the interaction between ACE inhibitors and the ACE gene on incident diabetes. Use of ACE inhibitors was a preventive factor and the DD ACE genotype was a risk factor. Before recoding, the RERI, AP, and S gave inconsistent results (RERI = 0.26 [95%CI: −0.30; 0.82], AP = 0.30 [95%CI: −0.28; 0.88], S = 0.35 [95%CI: 0.02; 7.38]), with the first two measures suggesting positive interaction and the third negative interaction. After recoding use of ACE inhibitors, all three measures gave consistent results indicating negative interaction (RERI = −0.37 [95%CI: −1.23; 0.49], AP = −0.29 [95%CI: −0.98; 0.40], S = 0.43 [95%CI: 0.07; 2.60]). Preventive factors should not be used to calculate measures of interaction on an additive scale without recoding.
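For reference, all three measures can be computed from the stratum-specific relative risks RR10 (factor A only), RR01 (factor B only), and RR11 (both factors), with the doubly unexposed stratum as the reference. A minimal sketch in Python using the standard textbook definitions; the example values below are hypothetical, not the study's data, and are chosen to reproduce the kind of sign disagreement described above:

def reri(rr11, rr10, rr01):
    # Relative excess risk due to interaction (departure from additivity).
    return rr11 - rr10 - rr01 + 1.0

def attributable_proportion(rr11, rr10, rr01):
    # Proportion of the risk among the doubly exposed attributable to interaction.
    return reri(rr11, rr10, rr01) / rr11

def synergy_index(rr11, rr10, rr01):
    # Joint excess risk relative to the sum of the separate excess risks.
    return (rr11 - 1.0) / ((rr10 - 1.0) + (rr01 - 1.0))

# Hypothetical example with a preventive factor (rr01 < 1) and no recoding:
rr11, rr10, rr01 = 0.9, 1.1, 0.5
print(round(reri(rr11, rr10, rr01), 2))                     # 0.3  -> suggests positive interaction
print(round(attributable_proportion(rr11, rr10, rr01), 2))  # 0.33 -> suggests positive interaction
print(round(synergy_index(rr11, rr10, rr01), 2))            # 0.25 -> suggests negative interaction

Because the joint excess risk (RR11 − 1) and the summed separate excess risks are both negative here, S falls below 1 while RERI and AP stay positive, which is exactly the inconsistency that recoding to the lowest-risk reference category resolves.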
Individual participant data (IPD) meta‐analysis is an increasingly used approach for synthesizing and investigating treatment effect estimates. Over the past few years, numerous methods for conducting an IPD meta‐analysis (IPD‐MA) have been proposed, often making different assumptions and modeling choices while addressing a similar research question. We conducted a literature review to provide an overview of methods for performing an IPD‐MA using evidence from clinical trials or non‐randomized studies when investigating treatment efficacy. With this review, we aim to assist researchers in choosing the appropriate methods and provide recommendations on their implementation when planning and conducting an IPD‐MA.
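A recurring distinction among these methods is between one-stage approaches (a single hierarchical model fitted to all participant data at once) and two-stage approaches (estimate the treatment effect within each study, then pool the estimates). A minimal sketch of a two-stage IPD-MA with fixed-effect inverse-variance pooling on simulated trials; all variable names and parameter values are illustrative assumptions, not drawn from the review:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Stage 1: estimate a covariate-adjusted treatment effect within each trial.
effects, variances = [], []
for trial in range(5):
    n = 200
    treatment = rng.integers(0, 2, n)
    age = rng.normal(60.0, 10.0, n)                        # illustrative covariate
    outcome = 1.0 * treatment + 0.05 * age + rng.normal(0.0, 2.0, n)
    X = sm.add_constant(np.column_stack([treatment, age]))
    fit = sm.OLS(outcome, X).fit()
    effects.append(fit.params[1])        # coefficient on treatment
    variances.append(fit.bse[1] ** 2)

# Stage 2: pool the per-trial estimates with inverse-variance weights.
w = 1.0 / np.asarray(variances)
pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect = {pooled:.2f} (SE {pooled_se:.2f})")  # true effect is 1.0

A one-stage analysis would instead fit a single model with trial-specific terms to the stacked data; the two approaches often agree closely but can differ when trials are small or treatment effects are heterogeneous.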
Logistic regression analysis, which estimates odds ratios, is often used to adjust for covariables in cohort studies and randomized controlled trials (RCTs) that study a dichotomous outcome. In case-control studies, the odds ratio is the appropriate effect estimate, and it can sometimes be interpreted as a risk ratio or rate ratio, depending on the sampling method.1-4 However, in cohort studies and RCTs, odds ratios are often interpreted as risk ratios. This is problematic because an odds ratio is always more extreme (further from 1) than the corresponding risk ratio, and the discrepancy grows as the incidence of the outcome increases.5 There are alternatives to logistic regression for obtaining adjusted risk ratios, for example, the approximate adjustment method proposed by Zhang and Yu5 and regression models that directly estimate risk ratios (also called "relative risk regression").6-9 Some of these methods have been compared in simulation studies.7,9 The method by Zhang and Yu has been strongly criticized,7,10 but regression models that directly estimate risk ratios are rarely applied in practice. In this paper, we illustrate the difference between risk ratios and odds ratios using clinical examples, and describe the magnitude of the problem in the literature. We also review methods to obtain adjusted risk ratios and evaluate these methods by means of simulations. We conclude with practical details on these methods and recommendations on their application.

Misuse of odds ratios in cohort studies and RCTs

An odds ratio is calculated as the ratio of the odds of the outcome in the patients with the treatment or exposure to the odds of the outcome in the patients without the treatment or exposure. The risk ratio, also referred to as the relative risk, is calculated as the ratio of the risks of the outcome in these two groups. In this article, we illustrate, by means of two empirical examples, that use of odds ratios in cohort studies and RCTs can lead to misinterpretation of results.

Clinical example 1: cohort study

A cohort study evaluated the relation between changes in marital status of mothers and cannabis use by their children.11 Use of cannabis was reported by 48.6% of the participants at age 21. Table 1 presents the crude and adjusted odds ratios, as reported in the paper, for one to two changes in maternal marital status and the risk of cannabis use, and for three or more changes in maternal marital status and the risk of cannabis use. We calculated the corresponding crude and adjusted risk ratios (Table 1) based on the data provided in the article. The odds ratios and risk ratios were quite different: a modest 50% increase in risk was observed (adjusted risk ratio 1.5), whereas the risk seemed more than doubled when the odds ratio was interpreted as a risk ratio (adjusted odds ratio 2.3).
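To make the Zhang and Yu approximation concrete: it converts an odds ratio into an approximate risk ratio using the incidence of the outcome in the unexposed group. A minimal sketch, with the incidence from the cannabis example used purely for illustration (the published formula expects the incidence among the unexposed, for which the overall 48.6% is only a stand-in):

def zhang_yu_risk_ratio(odds_ratio, p0):
    # Approximate risk ratio from an odds ratio (Zhang & Yu, JAMA 1998),
    # where p0 is the incidence of the outcome in the unexposed group.
    return odds_ratio / ((1.0 - p0) + p0 * odds_ratio)

# With a common outcome, the odds ratio is far more extreme than the risk ratio:
print(round(zhang_yu_risk_ratio(2.3, 0.486), 2))   # ~1.41, near the reported 1.5

As the text notes, this approximation has been criticized for adjusted odds ratios; regression models that estimate risk ratios directly (for example, log-binomial regression or Poisson regression with a robust variance) avoid the conversion altogether.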
Although missing outcome data are an important problem in randomized trials and observational studies, methods to address this issue can be difficult to apply. Using simulated data, the authors compared 3 methods to handle missing outcome data: 1) complete case analysis; 2) single imputation; and 3) multiple imputation (all 3 with and without covariate adjustment). Simulated scenarios focused on continuous or dichotomous missing outcome data from randomized trials or observational studies. When outcomes were missing at random, single and multiple imputations yielded unbiased estimates after covariate adjustment. Estimates obtained by complete case analysis with covariate adjustment were unbiased as well, with coverage close to 95%. When outcome data were missing not at random, all methods gave biased estimates, but handling missing outcome data by means of 1 of the 3 methods reduced bias compared with a complete case analysis without covariate adjustment. Complete case analysis with covariate adjustment and multiple imputation yield similar estimates in the event of missing outcome data, as long as the same predictors of missingness are included. Hence, complete case analysis with covariate adjustment can and should be used as the analysis of choice more often. Multiple imputation, in addition, can accommodate the missing-not-at-random scenario more flexibly, making it especially suited for sensitivity analyses.
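A minimal simulation sketch of the comparison described here, for a continuous outcome missing at random given observed baseline data; all parameter values are illustrative assumptions. It shows complete case analysis giving a biased treatment effect without covariate adjustment but an approximately unbiased one once the predictor of missingness enters the model:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200_000

# Simulated trial: continuous outcome depends on treatment and a covariate x.
treatment = rng.integers(0, 2, n)
x = rng.normal(0.0, 1.0, n)
y = 1.0 * treatment + 2.0 * x + rng.normal(0.0, 1.0, n)   # true effect = 1.0

# Missing at random: missingness depends on observed x and treatment,
# but not on the outcome itself given those variables.
p_missing = 1.0 / (1.0 + np.exp(-(x + treatment)))
observed = rng.random(n) > p_missing

# Complete case analysis WITHOUT covariate adjustment: biased, because the
# complete cases have different covariate distributions in the two arms.
crude = sm.OLS(y[observed], sm.add_constant(treatment[observed])).fit()

# Complete case analysis WITH covariate adjustment: approximately unbiased,
# since missingness is unrelated to the outcome given the model covariates.
X = sm.add_constant(np.column_stack([treatment[observed], x[observed]]))
adjusted = sm.OLS(y[observed], X).fit()

print(f"crude effect:    {crude.params[1]:.2f}")    # clearly below 1.0
print(f"adjusted effect: {adjusted.params[1]:.2f}") # close to 1.0

Multiple imputation would exploit the same information (the observed x) to fill in the missing outcomes, which is why the two approaches agree when the same predictors of missingness are modeled.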