The rapidly expanding field of rockburst prediction has attracted considerable interest because of its potential to lower the risk of engineering disasters, enhance mine production safety, and protect workers' lives. The goal of this research is therefore to predict the rockburst intensity class by optimizing four single machine learning models (SVM, DT, CNN, and RF) with fifteen optimization algorithms (Bayes, SSA, DBO, SCA, SA, PSO, SO, POA, GWO, IGWO, AVOA, CSA, GTO, NGO, and WSO). The resulting hybrid models were trained with ten-fold cross-validation, and the performance of each hybrid model was analyzed statistically. The original dataset was then oversampled with the SMOTE method to examine how class imbalance affects the hybrid models. The findings show that, on the original dataset, all optimization algorithms increase the accuracy of the DT, CNN, and RF models, whereas the SVM model benefits more from balancing the dataset. Once the dataset is balanced, every optimization algorithm improves the accuracy of the SVM model but decreases the accuracy of the DT model; for the CNN and RF models, most optimization algorithms improve the accuracy while only a small minority reduce it. This study provides a valuable reference for the development of future rockburst prediction models.
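As a rough illustration of the workflow described above, the sketch below balances a dataset with SMOTE, tunes an SVM's hyperparameters, and evaluates the tuned model with ten-fold cross-validation. It assumes scikit-learn and imbalanced-learn; the data-loading function is hypothetical, and the paper's fifteen metaheuristic optimizers are replaced by a generic randomized search purely for illustration.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def load_rockburst_dataset():
    # Hypothetical placeholder: feature matrix X and rockburst intensity-class labels y.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))
    y = rng.integers(0, 4, size=200)
    return X, y

X, y = load_rockburst_dataset()

# Oversample the minority classes with SMOTE so that all intensity classes are balanced.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# Stand-in for a metaheuristic optimizer: randomized search over SVM hyperparameters,
# scored with ten-fold stratified cross-validation.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
search = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": np.logspace(-2, 3, 50), "gamma": np.logspace(-4, 1, 50)},
    n_iter=30,
    cv=cv,
    random_state=0,
)
search.fit(X_bal, y_bal)

# Re-evaluate the tuned model with ten-fold cross-validation.
scores = cross_val_score(search.best_estimator_, X_bal, y_bal, cv=cv)
print(f"Best params: {search.best_params_}, mean CV accuracy: {scores.mean():.3f}")

Note that, following the abstract's description, the whole dataset is oversampled before cross-validation; in practice one might instead apply SMOTE only within the training folds to avoid information leakage.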