Cloud radio access network (CRAN) has been shown to be an effective means of boosting network performance. This gain stems from the intelligent management of remote radio heads (RRHs) in terms of on/off operation mode and power consumption. Most conventional resource allocation (RA) methods, however, optimize the network utility without considering the switching overhead of RRHs across adjacent time intervals. When the network environment becomes time-correlated, mathematical optimization is not directly applicable. In this paper, we aim to optimize the energy efficiency (EE) subject to constraints on per-RRH transmission power and user data rates. To this end, we formulate the EE problem as a Markov decision process (MDP) and subsequently adopt a deep reinforcement learning (DRL) technique to reap the cumulative EE rewards. Our starting point is the deep Q network (DQN), which combines deep learning with Q-learning. In each time slot, DQN selects the RRH on/off configuration with the largest Q-value (the state-action value) and then solves a power minimization problem for the active RRHs. To overcome the Q-value overestimation issue of DQN, we propose a Double DQN (DDQN) framework that achieves a higher cumulative reward than DQN by decoupling action selection from target Q-value generation. Simulation results validate that the DDQN-based RA method is more energy-efficient than both the DQN-based RA algorithm and a baseline solution.
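To make the decoupling concrete, the following is a minimal sketch of how the two target values differ; the function names and the per-action value arrays (q_online_next, q_target_next) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dqn_target(reward, gamma, q_target_next):
    # Standard DQN: the target network both selects and evaluates
    # the next action, which tends to overestimate Q-values.
    return reward + gamma * np.max(q_target_next)

def ddqn_target(reward, gamma, q_online_next, q_target_next):
    # Double DQN: the online network selects the next action,
    # while the target network evaluates it, reducing the
    # overestimation bias of standard DQN.
    best_action = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[best_action]
```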