Climate simulations rely heavily on high-performance computing. As an atmospheric radiative transfer model, the Rapid Radiative Transfer Model for General Circulation Models (RRTMG) is used to calculate the transfer of electromagnetic radiation through a planetary atmosphere. Because radiation physics is one of the most time-consuming physical processes, large-scale and long-term simulations with the RRTMG challenge the development of efficient parallel algorithms that fit well into multicore clusters. This paper presents a method for improving the computational efficiency of radiation physics: an RRTMG long-wave radiation scheme (RRTMG_LW) accelerated on a graphics processing unit (GPU). First, a GPU-based acceleration algorithm with one-dimensional domain decomposition is proposed. Then, a second acceleration algorithm with two-dimensional domain decomposition is presented. The two algorithms were implemented in Compute Unified Device Architecture (CUDA) Fortran, yielding a GPU version of the RRTMG_LW, namely G-RRTMG_LW. The results demonstrate that the proposed acceleration algorithms are effective and that the G-RRTMG_LW achieves a significant speedup. In the case without I/O transfer, the 2-D G-RRTMG_LW on one K40 GPU achieves a speedup of 18.52× over the baseline performance on a single Intel Xeon E5-2680 CPU core.
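
To make the one-dimensional domain decomposition concrete, the following is a minimal CUDA Fortran sketch in which each GPU thread processes one atmospheric column. All names here (g_rrtmg_lw_sketch, lw_column_kernel, tlay, uflx) and the placeholder column computation are illustrative assumptions for exposition, not the paper's actual kernels.

    module g_rrtmg_lw_sketch
      use cudafor
      implicit none
    contains

      ! Hypothetical kernel illustrating 1-D domain decomposition:
      ! each CUDA thread handles one atmospheric column.
      attributes(global) subroutine lw_column_kernel(ncol, nlay, tlay, uflx)
        integer, value :: ncol, nlay
        real(8) :: tlay(ncol, nlay)    ! layer temperatures (device array)
        real(8) :: uflx(ncol, nlay)    ! upward long-wave flux (device array)
        integer :: icol, ilay

        icol = (blockIdx%x - 1) * blockDim%x + threadIdx%x
        if (icol <= ncol) then
          do ilay = 1, nlay
            ! Placeholder physics: the real RRTMG_LW would perform the
            ! spectral integration and radiative transfer for this column.
            uflx(icol, ilay) = 5.670374419d-8 * tlay(icol, ilay)**4
          end do
        end if
      end subroutine lw_column_kernel

    end module g_rrtmg_lw_sketch

    program demo
      use cudafor
      use g_rrtmg_lw_sketch
      implicit none
      integer, parameter :: ncol = 1024, nlay = 51
      real(8), allocatable :: tlay(:,:), uflx(:,:)
      real(8), device, allocatable :: tlay_d(:,:), uflx_d(:,:)
      type(dim3) :: grid, tBlock
      integer :: istat

      allocate(tlay(ncol, nlay), uflx(ncol, nlay))
      allocate(tlay_d(ncol, nlay), uflx_d(ncol, nlay))
      tlay = 260.0d0
      tlay_d = tlay                      ! host-to-device copy

      ! One thread per column, e.g. 128 threads per block.
      tBlock = dim3(128, 1, 1)
      grid   = dim3((ncol + tBlock%x - 1) / tBlock%x, 1, 1)
      call lw_column_kernel<<<grid, tBlock>>>(ncol, nlay, tlay_d, uflx_d)
      istat  = cudaDeviceSynchronize()

      uflx = uflx_d                      ! device-to-host copy
      print *, 'sample flux:', uflx(1, 1)
    end program demo

Under this assumed layout, the two-dimensional decomposition would additionally map the vertical (or spectral) dimension onto a second thread index, so that a single column's work is spread over many threads rather than one.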