Microseismic data are widely used to assess rockburst risk; however, the monitoring capabilities of seismic networks differ markedly across mines, and no network captures a complete catalog of microseismic events. These differences make it inequitable to apply the same methodologies to evaluate rockburst risk at different mines. This paper proposes a method for assessing the monitoring capability of seismic networks in the heterogeneous media of mines by integrating three gradient boosting algorithms: Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Categorical Boosting (CatBoost). First, the isolation forest algorithm is used for preliminary data cleansing, and features are engineered from the locations of events relative to the monitoring stations and the working face. Next, optimal hyperparameters for the three models are searched using 8508 microseismic events from a coal mine in eastern China as samples, and 18 sub-models are trained. Model weights are then assigned according to the performance metrics of the different algorithms, and an ensemble model is built to predict the monitoring capability of the network. The model performed well on both the training and test sets, achieving log loss, accuracy, and recall of 7.13, 0.81, and 0.76 and of 6.99, 0.80, and 0.77, respectively. Finally, the proposed method was compared with traditional approaches. Under identical conditions, it estimated the monitoring capability of the key areas to be 11% lower than the traditional methods did; the reasons for this discrepancy were identified and partially explained.
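To make the described pipeline concrete, the following is a minimal Python sketch of its three stages: isolation-forest cleansing, training the three gradient boosting classifiers, and combining them into a performance-weighted soft-voting ensemble. The feature matrix, labels, contamination rate, hyperparameters, and accuracy-proportional weighting rule are all illustrative assumptions, not the paper's actual settings.

```python
# Sketch of the abstract's pipeline: cleanse events with an isolation forest,
# train XGBoost / LightGBM / CatBoost, then weight their predicted
# probabilities by held-out performance. All specifics are placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

def clean_events(X, y, contamination=0.05):
    """Drop events flagged as anomalous by an isolation forest (assumed rate)."""
    mask = IsolationForest(contamination=contamination,
                           random_state=0).fit_predict(X) == 1
    return X[mask], y[mask]

# Placeholder data. In the paper, X would hold engineered features such as
# event-to-station and event-to-working-face distances, and y would mark
# whether the network is capable of detecting the event.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)

X, y = clean_events(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Three gradient boosting classifiers with assumed hyperparameters; the paper
# tunes these via a hyperparameter search and trains 18 sub-models in total.
models = {
    "xgb": XGBClassifier(n_estimators=200, eval_metric="logloss"),
    "lgbm": LGBMClassifier(n_estimators=200),
    "cat": CatBoostClassifier(iterations=200, verbose=0),
}
for m in models.values():
    m.fit(X_tr, y_tr)

# Weight each model by its held-out accuracy (one plausible metric-based rule).
acc = {k: accuracy_score(y_te, m.predict(X_te)) for k, m in models.items()}
w = {k: a / sum(acc.values()) for k, a in acc.items()}

def ensemble_proba(X_new):
    """Performance-weighted average of the three models' class probabilities."""
    return sum(w[k] * models[k].predict_proba(X_new) for k in models)

print("weights:", w)
print("ensemble accuracy:",
      accuracy_score(y_te, ensemble_proba(X_te).argmax(axis=1)))
```

The soft-voting combination shown here is one natural reading of "model weights determined by performance metrics"; the paper may instead weight by log loss or recall, which would only change how `w` is computed.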