Background: Inadvertent intraoperative hypothermia is a common complication that affects patient comfort and morbidity. Because the development of hypothermia is a complex phenomenon, predicting it with machine learning (ML) algorithms may be superior to logistic regression. Methods: We performed a single-center retrospective study and assembled a feature set comprising 71 variables. The primary outcome was hypothermia burden, defined as the area between the intraoperative temperature curve and the 37 °C threshold over time. We built seven prediction models (logistic regression, extreme gradient boosting (XGBoost), random forest (RF), multi-layer perceptron neural network (MLP), linear discriminant analysis (LDA), k-nearest neighbor (KNN), and Gaussian naïve Bayes (GNB)) to predict whether patients would develop no, mild, moderate, or severe hypothermia. For each model, we assessed discrimination (F1 score, area under the receiver operating characteristic curve, precision, recall) and calibration (calibration-in-the-large, calibration intercept, calibration slope). Results: We included data from 87,116 anesthesia cases. Predicting the hypothermia burden group with logistic regression yielded a weighted F1 score of 0.397. Ranked from highest to lowest weighted F1 score, the ML algorithms performed as follows: XGBoost (0.44), RF (0.418), MLP (0.406), LDA (0.4), KNN (0.362), and GNB (0.32). Conclusions: ML is suitable for predicting intraoperative hypothermia and could be applied in clinical practice.
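The hypothermia burden outcome described above (area under the intraoperative temperature curve below 37 °C, integrated over time) can be illustrated with a minimal sketch. The function name, the sampling grid, and the trapezoidal integration are assumptions for illustration only; the paper does not specify its exact computation:

```python
import numpy as np

def hypothermia_burden(times_min, temps_c, threshold_c=37.0):
    """Hypothetical illustration: temperature deficit below the threshold,
    integrated over time with the trapezoidal rule (units: degC * min)."""
    t = np.asarray(times_min, dtype=float)
    # Deficit is zero whenever the temperature is at or above the threshold.
    deficit = np.maximum(threshold_c - np.asarray(temps_c, dtype=float), 0.0)
    # Trapezoidal rule over possibly irregular sampling intervals.
    return float(np.sum((deficit[:-1] + deficit[1:]) / 2.0 * np.diff(t)))

# Example: core temperature dips to 36.0 degC mid-case, then recovers.
t = [0, 30, 60, 90, 120]            # minutes since induction
temp = [37.0, 36.5, 36.0, 36.5, 37.0]  # degC
print(hypothermia_burden(t, temp))  # prints 60.0 (degC * min)
```

A normothermic patient (all readings at or above 37 °C) has a burden of zero; larger values reflect deeper and/or longer hypothermia, which could then be binned into the mild/moderate/severe groups used as prediction targets.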