2021
DOI: 10.21203/rs.3.rs-483062/v1
Preprint

A Reinforcement Learning based Computing Offloading and Resource Allocation Scheme in F-RAN

Abstract: This paper investigates a computing offloading policy and the allocation of computational resources for multiple user equipments (UEs) in Device-to-Device (D2D) aided fog radio access networks (F-RANs). Given a dynamically changing wireless environment in which the channel state information (CSI) is difficult to predict or know exactly, we formulate the joint task-offloading and resource-optimization problem as a mixed-integer nonlinear programming problem that maximizes the total utility of all UEs. Concerning …

Cited by 3 publications (2 citation statements)
References 30 publications
“…This indicates that the dueling DQN algorithm might achieve higher performance than the DQN method when handling problems related to offloading policies and resources. The work in [114] adopts a dueling DQN algorithm to choose the most suitable offloading strategy for each user equipment (UE): offloading to a fog access point (FAP), offloading to a nearby idle UE, or processing the task locally. As the number of UEs requiring offloading increases, the centralized dueling DQN algorithm becomes more complex.…”
Section: Dueling DQN (mentioning, confidence: 99%)
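The dueling architecture described above splits the Q-function into a state value and per-action advantages. A minimal sketch of that aggregation step, with the three offloading options standing in as the action set (the action labels and all numeric values here are illustrative, not taken from the paper):

```python
# Sketch of the dueling-DQN aggregation step for an offloading decision.
# In a real agent, V(s) and A(s, a) come from two heads of a neural network;
# plain numbers are used here to show only the combination rule.

ACTIONS = ["local", "d2d_peer", "fap"]  # hypothetical action labels

def dueling_q_values(state_value, advantages):
    """Combine a scalar state value V(s) with per-action advantages A(s, a)
    into Q-values, subtracting the mean advantage for identifiability:
        Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
    """
    mean_adv = sum(advantages.values()) / len(advantages)
    return {a: state_value + adv - mean_adv for a, adv in advantages.items()}

q = dueling_q_values(2.0, {"local": -0.5, "d2d_peer": 0.1, "fap": 0.4})
best_action = max(q, key=q.get)  # greedy offloading choice -> "fap"
```

Subtracting the mean advantage is the standard identifiability trick: without it, the same Q-values could be produced by many (V, A) pairs.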
“…DQN may suffer from overestimation because it selects the maximum action value at every step. Double DQN was introduced to avoid this: the work in [33] considers the delay constraints and uncertain resource requirements of heterogeneous computing tasks, and uses a DDQN-based algorithm to avoid the curse of dimensionality and overestimation, ultimately demonstrating the algorithm's effectiveness. The network structure of DQN can also be changed to produce Dueling DQN: [34] first preprocesses the data and then uses Dueling DQN to optimize the objective function, converging faster than DQN.…”
Section: Related Work (mentioning, confidence: 99%)
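The overestimation fix mentioned above decouples action selection from action evaluation: the online network picks the next action, but the target network scores it. A minimal sketch of that target computation, with small Q-tables standing in for the two networks (all numbers are illustrative):

```python
# Sketch of the Double-DQN target: the online network *selects* the action,
# the target network *evaluates* it, which tempers the max-operator bias
# of vanilla DQN.

GAMMA = 0.9  # illustrative discount factor

def double_dqn_target(reward, q_online_next, q_target_next, done=False):
    """y = r + gamma * Q_target(s', argmax_a Q_online(s', a))"""
    if done:
        return reward
    a_star = max(q_online_next, key=q_online_next.get)  # chosen by online net
    return reward + GAMMA * q_target_next[a_star]       # scored by target net

# The online net overestimates action "b"; the target net's estimate
# keeps the bootstrapped value in check.
y = double_dqn_target(1.0,
                      q_online_next={"a": 0.2, "b": 0.9},
                      q_target_next={"a": 0.3, "b": 0.4})
# y = 1.0 + 0.9 * 0.4 = 1.36
```

Vanilla DQN would instead bootstrap from max(Q_target), so a single noisy high estimate in the target network would propagate directly into the target value.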