To resemble the body flexibility of biological snakes, snake-like robots are designed as a chain of body modules, which gives them many degrees of freedom (DoF) on the one hand and makes them challenging to control on the other. Compared with conventional model-based control methods, reinforcement learning (RL)-based ones offer promising solutions for designing agile and energy-efficient gaits for snake-like robots, as RL-based methods can fully exploit the robots' hyper-redundant bodies. However, RL-based methods for snake-like robots have rarely been investigated even in simulation, let alone deployed on real-world snake-like robots. In this work, we introduce a novel approach for designing energy-efficient gaits for a snake-like robot, which first learns a policy using an RL algorithm in simulation and then transfers it to a real-world robot, thereby enabling a fast and economical gait-generation process. We evaluate our approach in both simulations and real-world experiments and demonstrate that it generates substantially more energy-efficient gaits than conventional model-based controllers.