2021
DOI: 10.2298/csis200710055y

Deep reinforcement learning for resource allocation with network slicing in cognitive radio network

Abstract: With the development of wireless communication technology, the demand for data rates is growing rapidly, and mobile communication systems face a shortage of spectrum resources. Cognitive radio technology allows secondary users, with the primary user's permission, to use frequencies licensed to the primary user, which can effectively improve the utilization of spectrum resources. In this article, we establish a cognitive network model based on the underlay model and propos…

Cited by 12 publications (15 citation statements)

References 21 publications
“…Another pioneering work is reported in [181] in which the authors propose a single-agent Double DQN-based DRL to address the problem of joint channel selection and power allocation with network slicing in CRNs. Their study aims to provide high SE and QoS for cognitive users.…”
Section: In Cellular Network
confidence: 99%
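The joint channel selection and power allocation problem described above is typically cast as a single discrete action space so a (Double) DQN can act over it. A minimal sketch of such a flat encoding, assuming a small illustrative channel count and power set (all names and values are hypothetical, not from the cited paper):

```python
# Hypothetical joint action space for a Double DQN agent in a CRN:
# each flat action index maps to one (channel, power level) pair.

N_CHANNELS = 4
POWER_LEVELS = [5, 10, 20]  # transmit power in mW, illustrative values

def decode_action(a):
    # Unflatten the action index into a (channel, power) pair.
    channel = a // len(POWER_LEVELS)
    power = POWER_LEVELS[a % len(POWER_LEVELS)]
    return channel, power

n_actions = N_CHANNELS * len(POWER_LEVELS)  # 12 joint actions
print(decode_action(7))  # (2, 10): channel 2 at 10 mW
```

With this encoding, the Q-network's output layer simply has `n_actions` units, one per (channel, power) combination.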
“…The proposals allocate the radio and power resources to the slices. In [112], a method called CNDDQN, based on DDQN (Double Deep Q-Network), is proposed for resource allocation in cognitive RAN slicing. In this method, radio and power resources are allocated to eMBB and uRLLC slices by considering the SE and QoE of users (QoE being defined as the ratio of sent packets to the total number of packets for each user).…”
Section: Energy Efficiency
confidence: 99%
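The QoE metric quoted above is a simple per-user delivery ratio; a minimal sketch, with an illustrative function name and a guard for idle users (neither is from the cited work):

```python
# Per-user QoE as described in the quoted passage:
# the ratio of sent packets to the total number of packets.

def qoe(sent_packets, total_packets):
    # Avoid division by zero for users with no traffic in the window.
    if total_packets == 0:
        return 0.0
    return sent_packets / total_packets

print(qoe(90, 100))  # 0.9
```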
“…In RL methods, GAN-DQN ([127]) and GAT-DQN ([91], [92]) have been used to improve decision making in DQN-based methods. Double DQN ([115], [112], [84]) and Dueling DQN ([106], [113], [93], [135]) have also been used to improve decision making and mitigate overestimation in DQN-based methods. Because the AC methods in [79]-[81] can produce fluctuating decisions, A2C and DDPG are used to address this problem in [110], [82], [115], [116], [83], [107], [132].…”
Section: Lessons Learned
confidence: 99%
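The overestimation fix attributed to Double DQN comes from decoupling action selection (online network) from action evaluation (target network). A minimal sketch of the two bootstrap targets, assuming scalar rewards and a small discrete action set (all names and numbers are illustrative):

```python
# Vanilla DQN vs. Double DQN bootstrap targets.
# Q-values are plain lists indexed by action.

def dqn_target(reward, q_target_next, gamma=0.99):
    # Vanilla DQN: the target network both selects and evaluates the
    # next action, so noise in its estimates biases the max upward.
    return reward + gamma * max(q_target_next)

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    # Double DQN: the online network picks the argmax action,
    # and the target network scores that chosen action.
    a_star = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[a_star]

# Example: the target net spuriously inflates action 1.
q_online = [1.0, 0.5]  # online net prefers action 0
q_target = [0.9, 2.0]  # noisy target net inflates action 1
print(dqn_target(1.0, q_target))                   # chases the inflated value
print(double_dqn_target(1.0, q_online, q_target))  # evaluates action 0 instead
```

In the example, vanilla DQN bootstraps from the inflated estimate (1 + 0.99 * 2.0), while Double DQN bootstraps from the online network's preferred action (1 + 0.99 * 0.9), illustrating the reduced overestimation the quoted passage refers to.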
“…Using experimental results, it is shown that agents learn to satisfy the latency constraints on V2V links while minimizing the interference to V2I communications. The authors in [176] propose a single-agent Double DQN-based DRL to address the problem of joint channel selection and power allocation with network slicing in CRNs. The aim of their study is to provide high SE and QoS for cognitive users.…”
Section: In Cellular and HomNets
confidence: 99%