Purpose: Inconsistent results have been reported in the literature on the association between obesity, expressed as increased body mass index (BMI), and risk for surgical site infection (SSI) following spine surgery. The objective of this study was to review and quantify the association between increased BMI and risk of spinal SSI in adults. Methods: We performed a comprehensive search for relevant studies using PubMed, Embase, and references of published manuscripts. Study-specific risk measures were transformed into slope estimates and combined using the random effects meta-analysis model to establish the risk of SSI associated with every 5-unit increase in BMI. Results: Thirty-four articles underwent full-text review. Variations were noted among these studies in relation to SSI diagnosis criteria and BMI cut-off levels used to define obesity. Data from 12 retrospective studies were included in the analyses. Results showed that BMI was significantly positively associated with the risk of spinal SSI. Unadjusted risk estimates demonstrated that a 5-unit increase in BMI was associated with a 13 % increased risk of SSI [crude odds ratio (OR): 1.13; 95 % CI: 1.07-1.19, p < 0.0001]. Pooling of risk estimates adjusted for diabetes and other confounders resulted in a 21 % increase in risk of spinal SSI for every 5-unit increase in BMI (adjusted OR: 1.21; 95 % CI: 1.13-1.29, p < 0.0001). Conclusion: Higher BMI is associated with an increased risk of SSI following spine surgery. Prospective studies are needed to confirm this association and to determine whether other measures of fat distribution are better predictors of risk of SSI.
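The abstract describes transforming study-specific risk measures into per-5-unit BMI slope estimates and pooling them with a random effects model, but does not spell out the computation. The following is a minimal sketch of one common way to do this, using hypothetical study values (not data from the meta-analysis) and a DerSimonian-Laird random effects model; it is an illustration under those assumptions, not the authors' code.

```python
# Sketch: convert category-specific odds ratios into per-BMI-unit slope
# estimates and pool them with a DerSimonian-Laird random effects model.
# All study values below are hypothetical placeholders.
import numpy as np

# (OR, lower 95% CI, upper 95% CI, BMI difference between exposure groups)
studies = [
    (1.8, 1.1, 2.9, 7.0),   # hypothetical study 1
    (1.4, 0.9, 2.2, 6.0),   # hypothetical study 2
    (2.1, 1.2, 3.7, 8.0),   # hypothetical study 3
]

slopes, variances = [], []
for or_, lo, hi, d_bmi in studies:
    log_or = np.log(or_)
    se_log_or = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE recovered from the 95% CI
    slopes.append(log_or / d_bmi)                        # log(OR) per 1 BMI unit
    variances.append((se_log_or / d_bmi) ** 2)

slopes, variances = np.array(slopes), np.array(variances)

# DerSimonian-Laird random effects pooling
w = 1.0 / variances
theta_fixed = np.sum(w * slopes) / np.sum(w)
q = np.sum(w * (slopes - theta_fixed) ** 2)
tau2 = max(0.0, (q - (len(slopes) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_star = 1.0 / (variances + tau2)
theta_re = np.sum(w_star * slopes) / np.sum(w_star)
se_re = np.sqrt(1.0 / np.sum(w_star))

# Express the pooled slope as an OR per 5-unit increase in BMI
or_per_5 = np.exp(5 * theta_re)
ci = np.exp(5 * (theta_re + np.array([-1.96, 1.96]) * se_re))
print(f"OR per 5-unit BMI increase: {or_per_5:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```

The key step is rescaling each study's log odds ratio by the BMI contrast it represents, so that studies using different obesity cut-offs contribute comparable per-unit slopes before pooling.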
We would like to respond to the Letter to the Editor concerning our meta-analysis [1].

For the electronic search, we used PubMed, which provides a search interface to Medline, and Embase. These two search engines complement each other [2] and, in addition to CENTRAL, contain the largest number of studies according to the Cochrane Handbook for Systematic Reviews of Interventions [3]. We did not search CENTRAL, as the focus of our research was on observational studies. The search, supplemented by screening of reference lists, identified 34 studies relevant to our research question, which is not a small number. However, it was not appropriate to pool results from all 34 studies. It is not uncommon to pool results from fewer than 10 studies: between 50 and 75 % of meta-analyses, including Cochrane reviews, contain fewer than 10 studies [4,5]. While we know that a single search engine does not yield all pertinent studies, we are not aware of guidelines that specify a minimum of three databases to be searched. Nevertheless, during the literature search we used Google Scholar to identify additional studies. Despite the large number of results, the search did not yield relevant studies beyond those already identified. We did not formally include Google Scholar, as the search engine has low specificity and does not utilise controlled vocabulary relevant to the research question [6].

Ideally, to circumvent the problem of publication bias, a meta-analysis should include unpublished literature and non-English language studies, but this is not always feasible and is a limitation of many published meta-analyses, not only ours. We conducted a formal assessment of publication bias using more than one method and did not rely on visual inspection of funnel plots. Applying these methods when the number of studies is small is not wrong, but has limited power [4]. We reported the results of these tests and were careful in our interpretations by acknowledging the low power of Egger's test. We never claimed that there was no publication bias, and we stated that "we were unable to reliably assess the presence of publication bias due to the small number of studies included" [1]. We would also like to draw the attention of the writers of the Letter to the fact that we explicitly discussed English language bias and the potential over-estimation of results in the discussion section.

In relation to assessment of study quality, first, we assigned the 'level of evidence' to each study, which is a "hierarchical rating system for classifying study quality" [7]. It is a well-established scoring system and is used by several journals. Second, to minimise bias, we used several inclusion criteria, and listed all excluded studies as well as the reasons for their exclusion. We reported all information that would potentially introduce bias, such as study design, categorisation of BMI, and adjustment for confounders. No subjectivity was involved in extracting the data. Therefore, data extraction was reviewed by the second author to ensure correctness rather than any other degree of agreement...
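The reply refers to formal tests of publication bias, naming Egger's test and noting its low power with few studies. As an illustration only (the letter does not reproduce the authors' analysis, and the values below are hypothetical), a minimal sketch of Egger's regression test is:

```python
# Sketch: Egger's regression test for funnel plot asymmetry.
# Effect sizes and standard errors below are hypothetical placeholders;
# with few studies the intercept test has low power, as noted in the letter.
import numpy as np
from scipy import stats

log_or = np.array([0.12, 0.25, 0.05, 0.40, 0.18])   # hypothetical log odds ratios
se     = np.array([0.05, 0.10, 0.04, 0.20, 0.08])   # hypothetical standard errors

z = log_or / se          # standardized effects
precision = 1.0 / se     # precision

# Ordinary least squares: z = a + b * precision.
# Egger's test asks whether the intercept a differs from zero.
X = np.column_stack([np.ones_like(precision), precision])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ beta
dof = len(z) - 2
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
t_intercept = beta[0] / np.sqrt(cov[0, 0])
p_intercept = 2 * stats.t.sf(abs(t_intercept), dof)
print(f"Egger intercept = {beta[0]:.3f}, p = {p_intercept:.3f}")
```

A non-zero intercept suggests small-study asymmetry, but with a handful of studies a non-significant result cannot rule out publication bias, which is exactly the caveat the authors state.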
Background: In literature-based meta-analyses of cancer prognostic studies, methods for extracting summary statistics from published reports have been extensively employed. However, no assessment of the magnitude of bias produced by these methods, or comparison of their influence on fixed vs. random effects models, has been published previously. Therefore, the purpose of this study is to empirically assess the degree of bias produced by the methods used for extracting summary statistics and examine potential effects on fixed and random effects models. Methods: Using published data from cancer prognostic studies, systematic differences between reported statistics and those obtained indirectly using log-rank test p-values and total number of events were tested using paired t tests and the log-rank test of survival-agreement plots. The degree of disagreement between estimates was quantified using an information-based disagreement measure, which was also used to examine levels of disagreement between expressions obtained from fixed and random effects models. Results: Thirty-four studies provided a total of 65 estimates of lnHR and its variance. There was a significant difference between the means of the indirect lnHRs and the reported values (mean difference = -0.272, t = -4.652, p-value <0.0001), as well as between the means of the two estimates of variances (mean difference = -0.115, t = -4.5556, p-value <0.0001). Survival agreement plots illustrated a bias towards under-estimation by the indirect method for both lnHR (log-rank p-value = 0.031) and its variance (log-rank p-value = 0.0432). The magnitude of disagreement between estimates of lnHR based on the information-based measure was 0.298 (95% CI: 0.234 – 0.361) and, for the variances, it was 0.406 (95% CI: 0.339 – 0.470). As the disagreement between variances was higher than that between lnHR estimates, this increased the level of disagreement between lnHRs weighted by the inverse of their variances in fixed effects models. In addition, results indicated that random effects meta-analyses could be more prone to bias than fixed effects meta-analyses as, in addition to bias in estimates of lnHRs and their variances, levels of disagreement as high as 0.487 (95% CI: 0.416 – 0.552) and 0.568 (95% CI: 0.496 – 0.635) were produced due to between-studies variance calculations. Conclusions: Extracting summary statistics from published studies could introduce bias in literature-based meta-analyses and undermine the validity of the evidence. These findings emphasise the importance of reporting sufficient statistical information in research articles and warrant further research into the influence of potential bias on random effects models.
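The indirect method examined in this abstract recovers an approximate lnHR and its variance from a reported log-rank p-value and the total number of events. A minimal sketch of that calculation follows, using hypothetical values (not data from the study) and assuming equal allocation between arms, in the spirit of the Parmar/Tierney approach; the exact formulas used by the authors may differ.

```python
# Sketch: indirect estimation of lnHR and its variance from a two-sided
# log-rank p-value and the total number of events, assuming 1:1 allocation.
# Input values below are hypothetical placeholders.
import numpy as np
from scipy import stats

def indirect_ln_hr(p_two_sided, total_events, hr_greater_than_one=True):
    """Approximate lnHR and its variance from a log-rank p-value and event count."""
    v = total_events / 4.0                   # variance of (O - E) under equal allocation
    z = stats.norm.isf(p_two_sided / 2.0)    # |z| corresponding to the two-sided p-value
    o_minus_e = z * np.sqrt(v)
    if not hr_greater_than_one:              # sign taken from the reported direction of effect
        o_minus_e = -o_minus_e
    ln_hr = o_minus_e / v
    var_ln_hr = 1.0 / v
    return ln_hr, var_ln_hr

# Example: p = 0.03 from the log-rank test, 120 events, effect in the HR > 1 direction
ln_hr, var_ln_hr = indirect_ln_hr(0.03, 120)
print(f"indirect lnHR = {ln_hr:.3f}, variance = {var_ln_hr:.3f}, "
      f"implied HR = {np.exp(ln_hr):.2f}")
```

Because both the point estimate and its variance are approximated, errors propagate into the inverse-variance weights and, in random effects models, into the between-studies variance as well, which is the mechanism behind the larger disagreement the abstract reports for random effects analyses.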