In 5G networks, specific requirements are defined on the periodicity of Synchronization Signaling (SS) bursts. This imposes a constraint on the maximum duration for which a Base Station (BS) can be deactivated. At the same time, BS densification is expected in the 5G architecture, which, if left unaddressed, will lead to an energy crunch. In this paper, we propose a distributed algorithm based on Reinforcement Learning (RL) that controls the states of the BSs while respecting the requirements of 5G. By considering different levels of Sleep Modes (SMs), the algorithm chooses how deep a BS can sleep according to the switch-off SM level policy that best maximizes the trade-off between energy savings and system delay, where the delay is computed from the wake-up times of the different SM levels. Results show that our algorithm outperforms schemes that use a single SM level. Furthermore, our simulations show energy savings of up to 90% when users are delay tolerant, while still respecting the periodicity of the SS bursts in 5G.
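A minimal sketch of the kind of controller this abstract describes, assuming illustrative power draws and wake-up times for each SM level; the specific values, the state encoding, and the tradeoff weight W are hypothetical, not taken from the paper:

```python
import random
from collections import defaultdict

# Hypothetical SM levels: (relative power draw, wake-up time in ms).
# Deeper sleep saves more energy but takes longer to reactivate.
SM_LEVELS = {
    0: (1.00, 0.0),   # fully active
    1: (0.50, 0.07),  # light sleep
    2: (0.15, 1.0),   # deeper sleep
    3: (0.05, 10.0),  # deepest sleep
}
ALPHA, GAMMA, EPSILON, W = 0.1, 0.9, 0.1, 0.5  # learning rate, discount, exploration, tradeoff weight
Q = defaultdict(float)  # Q[(state, action)], zero-initialized

def reward(level, traffic_arrived):
    """Trade-off: reward the energy saved, penalize the wake-up delay if traffic arrives."""
    power, wakeup_ms = SM_LEVELS[level]
    delay_penalty = wakeup_ms / 10.0 if traffic_arrived else 0.0  # normalized to the deepest SM
    return W * (1.0 - power) - (1.0 - W) * delay_penalty

def choose_sm_level(state):
    """Epsilon-greedy selection of the sleep depth for the current traffic state."""
    if random.random() < EPSILON:
        return random.choice(list(SM_LEVELS))
    return max(SM_LEVELS, key=lambda a: Q[(state, a)])

def update(state, action, r, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in SM_LEVELS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
```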
In 5G networks, specific requirements are defined on the periodicity of Synchronization Signaling (SS) bursts. This imposes a constraint on the maximum duration for which a Base Station (BS) can be deactivated. At the same time, BS densification is expected in the 5G architecture, which will cause a drastic increase in network energy consumption accompanied by complex interference management. In this paper, we study the Energy-Delay Tradeoff (EDT) problem in a Heterogeneous Network (HetNet) where small cells can switch to different sleep mode levels to save energy while maintaining a good Quality of Service (QoS). We propose a distributed Q-learning controller for small cells that adapts cell activity while taking into account the co-channel interference between the cells. Our numerical results show that the multi-level sleep scheme outperforms the binary sleep scheme, with energy savings of up to 80% when users are delay tolerant, while respecting the periodicity of the SS bursts in 5G.
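A sketch of the distributed setting this second abstract describes: each small cell runs its own independent Q-learner whose state includes a quantized measurement of co-channel interference. The state discretization and hyperparameters below are assumptions, not the paper's:

```python
import random
from collections import defaultdict

class CellSleepAgent:
    """Independent per-cell Q-learner; no central coordination is required."""

    def __init__(self, n_levels=4, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = list(range(n_levels))  # 0 = active ... n_levels-1 = deepest sleep
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    @staticmethod
    def encode_state(load, interference):
        """Quantize normalized cell load and measured co-channel interference (both in [0, 1])."""
        return (min(int(load * 4), 3), min(int(interference * 4), 3))

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (reward + self.gamma * best_next - self.q[(state, action)])

# One agent per small cell in the HetNet cluster.
cells = [CellSleepAgent() for _ in range(10)]
```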
The massive deployment of small cells in 5G networks is one way to meet the ever-increasing mobile data traffic and to provide very high throughput by bringing users closer to the Base Stations (BSs). This large increase in the number of network elements entails a significant increase in energy consumption and carbon footprint, along with complex interference management. To address these challenges, we consider multi-level Sleep Modes (SMs), in which BS components with similar activation/deactivation times are put to sleep together. The deeper and more energy-efficient the SM, the longer the BS takes to reactivate, which may degrade the Quality of Service (QoS). While this adds operational flexibility to the BS, it also complicates management for the operator. In this paper, we consider a heterogeneous network architecture where small cells can switch to different SM levels to save energy and reduce the dropping rate. We propose a reinforcement learning algorithm for small cells that adapts their activity subject to a service delay constraint. The algorithm learns from the environment, based on the co-channel interference, the cell buffer size, and the expected cell throughput, in order to decide the best SM policy. Numerical results show that significant energy savings can be obtained with an acceptable dropping rate. Moreover, we show that while offloading users to the macro cell can significantly reduce their delay, the dropping rate, and the cluster energy consumption, it comes at the cost of decreasing the network energy efficiency by up to a factor of 5 compared with the no-offload case.
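The energy-efficiency cost of offloading noted at the end of this abstract follows from the standard bits-per-joule metric; the power and throughput figures below are invented for illustration and are not the paper's measurements:

```python
def energy_efficiency(throughput_mbps, power_w):
    """Network energy efficiency in bits per joule."""
    return throughput_mbps * 1e6 / power_w

# Illustrative cluster: 10 small cells, 50 Mb/s of traffic each.
# No offload: small cells stay active (assumed 30 W apiece).
ee_no_offload = energy_efficiency(10 * 50, 10 * 30)

# Offload: small cells drop to deep sleep (assumed 3 W apiece), but the
# macro cell (assumed 1500 W) absorbs the traffic, so network EE falls.
ee_offload = energy_efficiency(10 * 50, 10 * 3 + 1500)

print(f"no offload: {ee_no_offload:,.0f} bit/J")
print(f"offload   : {ee_offload:,.0f} bit/J")
print(f"ratio     : {ee_no_offload / ee_offload:.1f}x lower with offload")
```

With these made-up numbers the ratio happens to come out around 5x; the underlying point is that a macro cell's large fixed power dominates the denominator once the efficient small cells are asleep.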
In this paper, we propose a sleep strategy for energy-efficient 5G Base Stations (BSs) with multiple Sleep Mode (SM) levels to bring down energy consumption. Managing these energy savings is coupled with managing the Quality of Service (QoS) impact of waking up sleeping BSs, so a tradeoff exists between energy savings and delay. Unlike prior work that studies this problem for a binary-state BS (ON and OFF), this work focuses on a multi-level SM environment, where the BS can switch between several SM levels. We propose a Q-learning algorithm that controls the state of the BS depending on the geographical location and velocity of neighboring users, in order to learn the policy that best balances energy savings against delay. We evaluate the performance of our proposed algorithm against an online suboptimal algorithm that we also introduce. Results show that the Q-learning algorithm performs better, with energy savings of up to 92% and better delay performance than the heuristic scheme.
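A sketch of how the user-geometry-dependent state in this last abstract might be discretized, next to the kind of simple online heuristic the Q-learning policy is benchmarked against. The ring/bucket sizes and thresholds are hypothetical:

```python
import math

def encode_state(user_xy, bs_xy, speed_mps):
    """Quantize the nearest user's distance to the BS and moving speed into a discrete state."""
    dist = math.hypot(user_xy[0] - bs_xy[0], user_xy[1] - bs_xy[1])
    dist_bin = min(int(dist // 100), 4)      # 100 m rings, capped at 4
    speed_bin = min(int(speed_mps // 5), 3)  # 5 m/s buckets, capped at 3
    return (dist_bin, speed_bin)

def heuristic_sleep_level(state):
    """Illustrative baseline: sleep deeper when users are far away and slow-moving."""
    dist_bin, speed_bin = state
    if dist_bin >= 3 and speed_bin == 0:
        return 3  # deepest sleep: far, slow-moving users
    if dist_bin >= 2:
        return 2
    if dist_bin >= 1:
        return 1
    return 0      # stay active: users are close by
```

The Q-learning agent shares this state encoding but learns its level choices from the observed energy-delay reward instead of fixed thresholds.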