Radiant ceiling cooling systems are widely adopted in modern office buildings because they improve cooling-source efficiency and reduce fossil fuel consumption and carbon dioxide emissions by utilizing low-grade natural energy. However, their nonlinear behavior and large thermal inertia make them difficult to control. With advances in computing and artificial intelligence, deep reinforcement learning (DRL) shows promise for the operation and control of such large-inertia radiant cooling systems. This paper compares DRL control with traditional control methods for radiant ceiling cooling systems in two typical office rooms across three different climate regions. Simulation results show that, with an indoor target temperature of 26 °C and an allowable fluctuation range of ±1 °C, DRL-based on–off control and DRL-based variable water temperature control keep the indoor temperature within the allowable range for 80% and 93–99% of the operating time, respectively. In contrast, traditional on–off control and PID variable water temperature control meet this requirement for only about 70% and 90–93% of the operating time, respectively. Furthermore, DRL control reduces the energy consumption of the radiant ceiling cooling system by 3.19% to 6.30% relative to traditional on–off control, and by up to 10.48% relative to PID variable water temperature control. Consequently, the DRL control method outperforms the traditional methods in both limiting indoor temperature fluctuations and reducing the energy consumption of radiant ceiling cooling systems.
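For readers evaluating comparable controllers on their own simulation output, the sketch below illustrates, in Python, how the two headline metrics quoted above are typically computed: the fraction of operating time the indoor temperature stays within the ±1 °C band around the 26 °C target, and the relative energy savings of one controller versus another. The array names (`temp_drl`, `power_onoff`, etc.) and the randomly generated traces are purely hypothetical placeholders for simulated time series; this is not the authors' post-processing code.

```python
import numpy as np

def time_within_band(indoor_temp, target=26.0, tol=1.0):
    """Fraction of operating time with |T_indoor - target| <= tol (temperatures in °C)."""
    indoor_temp = np.asarray(indoor_temp, dtype=float)
    return np.mean(np.abs(indoor_temp - target) <= tol)

def energy_savings(energy_baseline, energy_candidate):
    """Relative energy savings of a candidate controller versus a baseline controller."""
    return (energy_baseline - energy_candidate) / energy_baseline

# Hypothetical hourly traces over a 120-day cooling season (placeholders for real simulation output).
rng = np.random.default_rng(0)
n = 24 * 120
temp_drl = 26.0 + rng.normal(0.0, 0.5, size=n)      # indoor temperature under DRL control, °C
temp_onoff = 26.0 + rng.normal(0.0, 0.9, size=n)    # indoor temperature under traditional on-off control, °C
power_drl = np.clip(rng.normal(2.0, 0.4, size=n), 0.0, None)    # cooling power under DRL control, kW
power_onoff = np.clip(rng.normal(2.1, 0.5, size=n), 0.0, None)  # cooling power under on-off control, kW

print(f"DRL within ±1 °C:    {time_within_band(temp_drl):.1%}")
print(f"On-off within ±1 °C: {time_within_band(temp_onoff):.1%}")
# Energy use is the time integral of cooling power; with a 1 h time step this is a plain sum (kWh).
print(f"Energy savings:      {energy_savings(power_onoff.sum(), power_drl.sum()):.2%}")
```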