Rate-Splitting Multiple Access (RSMA) has been recognized as an effective technique to reconcile the tradeoff between fully decoding interference and treating interference as noise in 6G and beyond networks. In this paper, in line with the need for network sustainability, we study the energy-efficient power and rate allocation of the common and private messages transmitted in the downlink of a single-cell, single-antenna RSMA network. In contrast to the literature, which resorts to heuristic approaches to deal with the joint problem, we transform the formulated energy efficiency maximization problem into a multiagent Deep Reinforcement Learning (DRL) problem, in which each transmitted private message represents a different DRL agent. Each agent explores its own state-action space, whose size is fixed and independent of the number of agents, and shares the experience gained through exploration with a common neural network. Two DRL algorithms, namely the value-based Deep Q-Learning (DQL) and the policy-based REINFORCE, are properly configured and applied to solve the problem. The adaptation of the proposed DRL framework to the considered network's sum-rate maximization objective is also demonstrated. Numerical results obtained via modeling and simulation verify the effectiveness of the proposed DRL framework in reaching a solution to the joint problem under both optimization objectives, outperforming existing heuristic approaches and algorithms from the literature.

INDEX TERMS Energy efficiency maximization, rate-splitting multiple access (RSMA), deep reinforcement learning (DRL).