To the Editor We read with concern the article published in JAMA Surgery by Rozeboom and colleagues.1 The report is compliant with the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement,2 but it is a good example that such compliance does not equate with a low risk of bias.3 The authors aimed to develop a model to predict intensive care unit (ICU) admission within 30 postoperative days, with the rationale that it would aid risk stratification and allow better resource utilization. However, for several reasons, the study design (population, predictors, outcome) and statistical analysis seem inappropriate to answer that question.

The authors did not match the study population to the target population that would benefit from the model. Outpatient surgery should not be considered for a model predicting ICU utilization because not even hospital admission was planned at the outset. Likewise, patients with an ongoing need for organ support, or who were in the ICU at the time of surgery, should also have been excluded from the analysis.

The choice of predictors and their reference categories also needs consideration. General surgery is likely a risk factor for ICU admission compared with, for example, urological or gynecological surgery, and would be better modeled as a risk factor than as a reference category. Work relative value units, while capturing the extent of surgery, were developed for reimbursement purposes and lack the property of being an easily measurable and applicable variable. Finally, only 13 American Society of Anesthesiologists class I patients were admitted to the ICU, making it a category that should have been collapsed.

The outcome time frame does not match the study objectives either.
A reasonable time frame would be at most 48 hours, while a more useful outcome definition would be actual use of ICU-based organ support, with a clear differentiation of the reason for ICU admission.

Finally, the statistical analysis may have introduced bias for several reasons4: use of the stepwise-forward method for variable selection4; use of the Hosmer-Lemeshow test to assess calibration without presenting calibration-in-the-large and calibration slope measures5; absence of internal validation in the derivation sample; and absence of both shrinkage of the coefficient estimates and optimism-adjusted performance metrics (C statistic, calibration-in-the-large, calibration slope, Brier score).

If we apply the Prediction Model Risk of Bias Assessment Tool (PROBAST),3 the article scores as high risk of bias in all 4 aforementioned domains, along with high concern for applicability in all domains. Unfortunately, large sample sizes are not a remedy for methodological issues, so this model is not fit for purpose and should not be applied at this time.
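For readers unfamiliar with the calibration measures named above, a minimal sketch follows; the simulated data and variable names are our own illustration, not the authors' model. The calibration slope is obtained by regressing the observed outcome on the logit of the predicted probabilities (a slope below 1 signals predictions that are too extreme, as in overfitting), and calibration-in-the-large is the intercept of that model with the slope fixed at 1.

```python
import numpy as np
from scipy.optimize import brentq
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical illustration: an overfitted model typically produces
# predictions that are too extreme, yielding a calibration slope < 1.
n = 5000
lp_true = rng.normal(-2.0, 1.0, n)                    # true linear predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lp_true)))   # observed outcome
lp_model = 1.5 * lp_true + 0.3                        # overconfident model
p_model = 1.0 / (1.0 + np.exp(-lp_model))             # predicted risks

# Calibration slope: logistic regression of the outcome on the logit
# of the predicted probabilities (C=1e12 approximates no penalty).
logit_p = np.log(p_model / (1.0 - p_model))
slope_fit = LogisticRegression(C=1e12, solver="lbfgs").fit(
    logit_p.reshape(-1, 1), y
)
cal_slope = slope_fit.coef_[0, 0]

# Calibration-in-the-large: the intercept with the logit of the
# predictions as a fixed offset (slope fixed at 1), found as the root
# of mean(predicted risk) - mean(observed outcome).
def mean_diff(a):
    return np.mean(1.0 / (1.0 + np.exp(-(a + logit_p)))) - y.mean()

citl = brentq(mean_diff, -10.0, 10.0)

print(f"calibration slope: {cal_slope:.2f}")       # below 1 here
print(f"calibration-in-the-large: {citl:.2f}")     # nonzero here
```

In this simulation the slope falls well below 1 and the calibration-in-the-large is nonzero, the pattern one would expect from a model developed without shrinkage, which is precisely why these measures should be reported alongside (or instead of) a Hosmer-Lemeshow test.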
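The missing internal validation can likewise be illustrated with a bootstrap optimism correction of the C statistic, in which the entire modeling procedure is refit in each bootstrap sample and the average optimism is subtracted from the apparent performance. This is a generic sketch on simulated data (our own hypothetical example, not the authors' analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical data: 300 patients, 10 candidate predictors, only 2 of
# which carry signal. A small n relative to p invites optimism.
n, p = 300, 10
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.8] + [0.0] * (p - 2))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta))))

# Apparent performance: the model evaluated on its own training data.
model = LogisticRegression(C=1e12, solver="lbfgs", max_iter=1000).fit(X, y)
apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit in each bootstrap sample, then compare its
# performance in that sample with its performance in the original data.
B = 200
optimism = []
for _ in range(B):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(C=1e12, solver="lbfgs", max_iter=1000).fit(
        X[idx], y[idx]
    )
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

corrected_auc = apparent_auc - float(np.mean(optimism))
# corrected_auc < apparent_auc: the apparent C statistic overstates
# how the model would perform in new patients.
```

Reporting only the apparent C statistic, as criticized above, systematically overstates performance; the optimism-corrected value is the honest estimate, and the same correction applies to the calibration slope and Brier score.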