2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC)
DOI: 10.1109/ccnc46108.2020.9045699

Load Balancing in Cellular Networks: A Reinforcement Learning Approach

Cited by 22 publications (6 citation statements)
References 10 publications
“…They define the system reward as the delay reduction after performing a caching action. In [6], [7], [12], [13], [15], RL was adopted to achieve load balancing in the network, with the end goal of maximizing the sum throughput.…”
Section: Literature Review
“…To motivate the problem further, consider the following concrete scenarios: First, consider the scenario of optimizing a cellular network with heterogeneous traffic loads to maximize the total throughput of the network. In this case, a controller of the base stations (a.k.a., eNBs in LTE) may opt to offload the traffic from heavily congested cells to less congested cells using Cell Individual Offsets (CIOs) as in [6]- [8]. The CIO is a power offset that controls the handover power level threshold without affecting the QoE of the mid-cell users.…”
Section: Introduction
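The CIO mechanism described in the excerpt above can be sketched as a shifted handover condition: a user is offloaded when the neighbor cell's measured power plus the pairwise CIO exceeds the serving cell's power plus a hysteresis margin. A minimal sketch, assuming an A3-style comparison; the function name, dB values, and parameters are illustrative, not taken from the paper:

```python
def should_handover(rsrp_serving_dbm, rsrp_neighbor_dbm,
                    cio_db=0.0, hysteresis_db=2.0):
    """A3-style handover check: a positive CIO toward the neighbor
    lowers the effective threshold, pushing cell-edge users out of a
    congested cell while mid-cell users (with a large RSRP gap) stay."""
    return rsrp_neighbor_dbm + cio_db > rsrp_serving_dbm + hysteresis_db

# Without an offset, a slightly weaker neighbor does not trigger handover,
# but a +4 dB CIO offloads the same cell-edge user to that neighbor.
print(should_handover(-95.0, -96.0))              # no offset
print(should_handover(-95.0, -96.0, cio_db=4.0))  # with CIO offset
```

This illustrates why the CIO leaves mid-cell QoE untouched: only users whose serving and neighbor measurements are within a few dB of each other cross the shifted threshold.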
“…Load-balancing techniques are ubiquitous in the literature as well. In [6] and [7], the authors design an RL framework for optimizing cell parameters to balance traffic load across the cells. Their target is controlling the CIO of neighboring cells to force cell-edge users to hand over from congested cells into lighter-load cells.…”
Section: Literature Review
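The RL framework attributed to [6] and [7] above can be illustrated with a tabular Q-learning loop in which the state is a discretized cell-load level, the action is a CIO step, and the reward reflects the resulting throughput. This is a toy sketch under those assumptions; the class name, state labels, and hyperparameters are not from either paper:

```python
import random
from collections import defaultdict

class CioQLearner:
    """Toy Q-learning agent: states are discretized load levels,
    actions are CIO steps in dB, reward is an assumed throughput gain."""

    def __init__(self, actions=(-2.0, 0.0, 2.0),
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:          # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (td_target - self.q[(s, a)])

# One illustrative transition: raising the CIO from a congested cell
# earned a positive (assumed) throughput reward.
agent = CioQLearner(epsilon=0.0)
agent.update("high_load", 2.0, 1.0, "low_load")
print(agent.act("high_load"))  # greedy action after the update
</n```

The tabular form matches the excerpt's point: with continuous load states this table would explode, which is what motivates the DQN-based works discussed further below.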
“…Deep Q-Networks (DQN) relax the need for state discretization by using a deep network to approximate the Q-function, leaving only the action space to be discretized. The works of Attiah et al (2020) and Alsuhli et al (2021b) are based, respectively, on DQN and on the double DQN (DDQN) method proposed by Van Hasselt et al (2016). Although both papers use the RL agent to control CIO values, Attiah et al (2020) simplifies the problem by considering a single CIO value for all neighboring cells, whereas Alsuhli et al (2021b) considers a separate CIO value for each neighboring cell.…”
Section: Q-learning for Load Balancing
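The DQN/DDQN distinction drawn in the excerpt above lies only in how the bootstrap target is formed: DQN both selects and evaluates the next action with the target network, while DDQN (Van Hasselt et al, 2016) selects with the online network and evaluates with the target network, reducing overestimation bias. A minimal NumPy sketch of the two targets; the arrays are placeholders, not data from either paper:

```python
import numpy as np

def dqn_target(reward, q_target_next, gamma=0.99):
    # Standard DQN: the target network both picks and scores the action.
    return reward + gamma * np.max(q_target_next)

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    # Double DQN: the online network picks the action, the target
    # network scores it, decoupling selection from evaluation.
    a_star = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[a_star]

# When the two networks disagree, DDQN's target is the smaller one:
q_online = np.array([1.0, 2.0])   # online net prefers action 1
q_target = np.array([3.0, 0.5])   # target net overvalues action 0
print(dqn_target(0.0, q_target))             # uses max of q_target
print(ddqn_target(0.0, q_online, q_target))  # uses q_target[argmax online]
```

Per the excerpt, the papers differ only in action-space size (one shared CIO vs. one CIO per neighbor); the target computation above is the same regardless of how many CIO actions are exposed.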