Solar irradiance models help mitigate the lack of measurement data at ground stations. Conventionally, such models have relied on physical calculations or empirical correlations. Recently, machine learning, as a sophisticated statistical method, has gained popularity owing to its accuracy and potential. While some studies have compared machine learning models with other types of models, no study has yet compared them side by side, assessing their performance on the same datasets at different locations. Therefore, this study aims to evaluate the accuracy of three representative models for estimating solar irradiance, using measured atmospheric variables and cloud amount derived from satellite images as input parameters. Based on applicability and performance, this study selected the Fast All-sky Radiation Model for Solar applications (FARMS), derived from the radiative transfer approach; the Hammer model, which uses a simplified atmospheric correlation; and the long short-term memory (LSTM) model, which specializes in sequential datasets. Global horizontal irradiance (GHI) was modeled for five distinct locations in South Korea and compared with two years of hourly measurement data to yield the error metrics. When identical input parameters were used, the LSTM model outperformed FARMS and the Hammer model in terms of relative root mean square difference (rRMSD) and relative mean bias difference (rMBD). Training an LSTM model with the input parameters of FARMS, such as ozone, nitrogen, and precipitable water, yielded more accurate results than training it with those of the Hammer model. The result shows an unbiased and accurate estimation, with an rRMSD and rMBD of 23.72% and 0.14%, respectively. Conversely, FARMS offers faster processing and does not require a significant amount of data to produce a fair estimation.
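
For context, the error metrics cited above are commonly defined as in the sketch below, where G_est,i and G_meas,i denote the estimated and measured hourly GHI and N the number of samples; normalization by the mean of the measured GHI is an assumption based on standard usage and is not stated explicitly in the text.

```latex
% Relative error metrics (standard definitions; normalization by mean measured GHI assumed)
\[
\mathrm{rRMSD} = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(G_{\mathrm{est},i}-G_{\mathrm{meas},i}\right)^{2}}}{\frac{1}{N}\sum_{i=1}^{N} G_{\mathrm{meas},i}} \times 100\%,
\qquad
\mathrm{rMBD} = \frac{\frac{1}{N}\sum_{i=1}^{N}\left(G_{\mathrm{est},i}-G_{\mathrm{meas},i}\right)}{\frac{1}{N}\sum_{i=1}^{N} G_{\mathrm{meas},i}} \times 100\%.
\]
```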