Introduction: The aim of this study was to compare various machine learning algorithms for constructing a diabetic retinopathy (DR) prediction model among patients with type 2 diabetes mellitus (DM) and to develop a nomogram based on the best model.

Methods: This cross-sectional study included type 2 DM patients receiving routine DR screening. Patients were randomly divided into training (n = 244) and validation (n = 105) sets. Least absolute shrinkage and selection operator (LASSO) regression was used to select clinical characteristics. Six machine learning algorithms were compared: decision tree (DT), k-nearest neighbours (KNN), logistic regression model (LM), random forest (RF), support vector machine (SVM), and XGBoost (XGB). Model performance was assessed via receiver operating characteristic (ROC), calibration, and decision curve analyses (DCAs). A nomogram was then developed on the basis of the best model.

Results: Compared with the five other machine learning algorithms (DT, KNN, RF, SVM, and XGB), the LM demonstrated the highest area under the ROC curve (AUC, 0.894) and recall (0.92) in the validation set. The calibration curves and DCA results were also relatively favourable. Disease duration, diabetic peripheral neuropathy (DPN), insulin dosage, urinary protein, and albumin (ALB) were included in the LM. The nomogram exhibited robust discrimination (AUC: 0.856 in the training set and 0.868 in the validation set), calibration, and clinical applicability across both datasets after 1,000 bootstrap resamples.

Conclusion: Among the six machine learning algorithms, the LM demonstrated the best performance. A logistic regression-based nomogram for predicting DR in type 2 DM patients was established. This nomogram may serve as a valuable tool for DR detection, facilitating timely treatment.
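
For readers who want to see the shape of the workflow outlined in the Methods, the following is a minimal sketch assuming Python with scikit-learn. The synthetic dataset, the selection threshold, and all hyperparameters are illustrative assumptions, not the study's actual data or code; the study's real predictors (disease duration, DPN, insulin dosage, urinary protein, ALB) are not modelled here.

```python
# Sketch of the abstract's pipeline: LASSO-based feature selection,
# a logistic regression model, and ROC-AUC/recall evaluation on a
# held-out validation set. All data below are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical dataset (349 patients total).
X, y = make_classification(n_samples=349, n_features=20,
                           n_informative=5, random_state=0)
# Mirror the study's 244/105 training/validation split.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, train_size=244, random_state=0)

# LASSO regression for selection of candidate characteristics:
# keep only features with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X_train, y_train)
selector = SelectFromModel(lasso, prefit=True)
X_train_sel = selector.transform(X_train)
X_val_sel = selector.transform(X_val)

# Logistic regression model (the best performer in the study),
# evaluated with the same metrics reported in the Results.
lm = LogisticRegression(max_iter=1000).fit(X_train_sel, y_train)
proba = lm.predict_proba(X_val_sel)[:, 1]
print(f"validation AUC:    {roc_auc_score(y_val, proba):.3f}")
print(f"validation recall: {recall_score(y_val, lm.predict(X_val_sel)):.3f}")
```

In the same spirit, the other five algorithms could be swapped in via scikit-learn's common estimator interface (e.g. DecisionTreeClassifier, KNeighborsClassifier, RandomForestClassifier, SVC) and compared on identical splits; calibration curves, decision curve analysis, and the bootstrap-validated nomogram would require additional tooling beyond this sketch.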