Abstract. Precipitation estimation with high accuracy and resolution is crucial for hydrological and meteorological applications, particularly in ungauged river basins and regions with scarce water resources. Many machine learning (ML) algorithms have been employed in the downscaling of precipitation; however, it remains unclear which algorithm outperforms the others. To address this issue, this study evaluates the performance of four ML-based downscaling methods for generating high-resolution precipitation estimates at an annual scale. Satellite-derived precipitation data; environmental variables such as latitude, longitude, normalized difference vegetation index (NDVI), digital elevation model (DEM), and land surface temperature (LST); and observations from rain gauges were used to construct the regression models. The performance of four ML algorithms, namely Support Vector Regression (SVR), Random Forest (RF), Spatial Random Forest (SRF), and Extreme Gradient Boosting (XGBoost), was compared with that of three conventional methods: Multiple Linear Regression (MLR), Geographically Weighted Regression (GWR), and Kriging interpolation. Results showed that the ML-based methods generally outperformed the traditional interpolation methods in precipitation downscaling, achieving higher accuracy and better reproducing the spatial distribution of rainfall. Among the ML approaches, XGBoost achieved the best performance, followed by SRF, RF, and SVR, indicating its robustness in capturing nonlinear relationships. The superior performance of SRF over RF and SVR is likely attributable to its explicit incorporation of spatial autocorrelation into the RF framework, which underscores the importance of capturing spatial variation in ML algorithms. These comparative findings provide a practical downscaling strategy for generating high-resolution precipitation data, which could benefit regional flood forecasting, drought monitoring, and irrigation planning.
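To make the evaluated workflow concrete, the sketch below illustrates the general regression-based downscaling scheme with XGBoost: a model is trained at the coarse satellite resolution relating annual precipitation to the environmental predictors, and the fitted relationship is then applied to the same predictors resampled to the fine target resolution. This is a minimal illustration under assumed data structures and column names (coarse_df, fine_df, "precip", etc.), not the authors' implementation, and the hyperparameters shown are placeholders rather than values reported in the study.

```python
# Minimal sketch of regression-based precipitation downscaling with XGBoost.
# Data frames, column names, and hyperparameters are illustrative assumptions.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

FEATURES = ["lon", "lat", "ndvi", "dem", "lst"]  # environmental predictors


def downscale_annual_precip(coarse_df: pd.DataFrame,
                            fine_df: pd.DataFrame) -> np.ndarray:
    """Fit at coarse resolution, then predict at fine resolution.

    coarse_df: one row per coarse pixel, containing FEATURES and 'precip'
               (satellite-derived annual precipitation).
    fine_df:   one row per fine pixel, containing the same FEATURES
               resampled to the target resolution.
    """
    model = XGBRegressor(
        n_estimators=500,
        learning_rate=0.05,
        max_depth=6,
        subsample=0.8,
        colsample_bytree=0.8,
        random_state=42,
    )
    # Learn the precipitation-covariate relationship at the coarse scale.
    model.fit(coarse_df[FEATURES], coarse_df["precip"])
    # Apply the learned relationship to fine-resolution predictors.
    return model.predict(fine_df[FEATURES])
```

In such a workflow, the rain-gauge observations mentioned above would serve to validate (and, in many studies, residual-correct) the downscaled field; the other algorithms compared here (SVR, RF, SRF, MLR, GWR, Kriging) would slot into the same fit-then-predict structure with their respective estimators.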