2020 IEEE 3rd International Conference on Computer and Communication Engineering Technology (CCET) 2020
DOI: 10.1109/ccet50901.2020.9213138
Double Deep Q-Network for Power Allocation in Cloud Radio Access Network

Cited by 15 publications (12 citation statements)
References 20 publications
“…3) In the RL approach, each action is matched to a corresponding reward, and the system learns the optimal actions that lead to the greatest accumulation of rewards. [12] devised a double deep Q-network-based resource allocation method that minimizes the total power consumption subject to constraints on the transmit power of each remote radio head (RRH) and on the user rates in the cloud radio access network. [13] is an extension of [12] that focuses on energy-efficiency maximization instead of power minimization.…”
Section: A. Motivation
confidence: 99%
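The double deep Q-network referred to in this statement mitigates Q-value overestimation by letting the main network pick the next action while a separate target network evaluates it. A minimal sketch of that target computation (not the authors' code; the reward, discount factor, and Q-values below are illustrative assumptions):

```python
import numpy as np

def double_dqn_target(reward, q_main_next, q_target_next, gamma=0.9, done=False):
    """Double-DQN target: y = r + gamma * Q_target(s', argmax_a Q_main(s', a))."""
    if done:
        return reward
    best_action = int(np.argmax(q_main_next))       # action selected by the main net
    return reward + gamma * q_target_next[best_action]  # evaluated by the target net

# Hypothetical Q-values over three discrete power-allocation actions.
q_main_next = np.array([1.0, 2.5, 0.3])
q_target_next = np.array([0.8, 2.0, 0.5])
y = double_dqn_target(reward=1.0, q_main_next=q_main_next, q_target_next=q_target_next)
print(y)  # r + gamma * q_target_next[1], since the main net prefers action 1
```

Decoupling action selection from action evaluation in this way is what distinguishes the DDQN of [12] and [13] from a vanilla DQN, where the same network does both.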
“…Iqbal et al in [100] investigated the energy-efficiency maximization problem in C-RAN while maintaining per-RRH transmit-power and user data-rate constraints. Unlike the previously discussed studies, this work utilized a double DQN (DDQN), which incorporates an additional target DQN alongside the main DQN.…”
Section: A. Power Consumption Optimization
confidence: 99%
“…The proposed DDQN-based DRL model in [100] was implemented with an FFNN containing two hidden layers of 64 and 32 neurons. ReLU was used as the activation function, and an experience replay with a capacity of 500 was considered.…”
Section: Evaluation Techniques for DL-based C-RAN
confidence: 99%
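The network shape described in this statement can be sketched directly. The sketch below assumes illustrative state and action dimensions (the paper's actual input/output sizes are not given here); only the hidden-layer widths (64 and 32, ReLU) and the replay capacity (500) come from the statement above:

```python
import numpy as np
from collections import deque

STATE_DIM, N_ACTIONS = 10, 4  # assumed dimensions, not from the paper

rng = np.random.default_rng(1)
W1 = rng.normal(0, 0.1, (STATE_DIM, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, 32));        b2 = np.zeros(32)
W3 = rng.normal(0, 0.1, (32, N_ACTIONS)); b3 = np.zeros(N_ACTIONS)

def q_values(state):
    h1 = np.maximum(0.0, state @ W1 + b1)  # first ReLU hidden layer, 64 units
    h2 = np.maximum(0.0, h1 @ W2 + b2)     # second ReLU hidden layer, 32 units
    return h2 @ W3 + b3                    # linear output: one Q-value per action

replay = deque(maxlen=500)                 # experience replay, capacity 500
state = rng.normal(size=STATE_DIM)
q = q_values(state)
replay.append((state, int(np.argmax(q))))  # store a transition fragment
print(q.shape)
```

A `deque` with `maxlen` gives the standard replay-buffer behavior: once 500 transitions are stored, the oldest is discarded as each new one arrives.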
“…A preliminary version of this paper was presented at IEEE CCET 2020 [16]. This version is an extension of that paper, focusing on EE maximization instead of power minimization.…”
Section: Introduction
confidence: 99%
“…3. Previous works [16], [31], and [35] rely on a static state-space feature to maximize a certain objective function. In this work, we optimize the objective function by considering a dynamic state-space feature, which is updated by the movement of UEs at each step.…”
Section: Introduction
confidence: 99%