2019 International Conference on Information and Communication Technology Convergence (ICTC)
DOI: 10.1109/ictc46691.2019.8939986
Reinforcement Learning Based 5G Enabled Cognitive Radio Networks

Cited by 10 publications (6 citation statements) · References 17 publications
“…For this purpose, some commonly used ML schemes are discussed ahead. RL (reinforcement learning) is a fluky methodology, and the algorithm is robust [40][41][42][43]; however, this architecture does not guarantee improved latencies and delays. Therefore, this technique is not recommended for applications where robust and quick decisions are required.…”
Section: Contributions
confidence: 99%
“…Contrariwise, Puspita et al. (2019) conducted a study on cognitive radio that involved the use of an RL algorithm (13). Their research presented insights into an RL-based cognitive radio network system that provides the most efficient frequency to use for the most efficient spectrum management.…”
Section: Literature Review
confidence: 99%
“…Initially the CRN observes the present state of the SU at its i-th interruption. After the interruption, the CR node chooses a handoff action for the (i+1)-th interruption caused by the PU, again based on the present state from the Q table [6,13]. The SU thus moves to the next state s_(j,i+1); as a result, the MOS is obtained as a reward, and the Q table is further updated [32].…”
Section: States of SUs Associations
confidence: 99%
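The excerpt above describes a standard Q-learning loop: observe the SU's state at a PU interruption, pick a handoff action from the Q table, receive the MOS as a reward, and update the table. A minimal sketch of that loop follows; the state space, channel count, and the MOS reward model are hypothetical placeholders, not taken from the cited paper.

```python
import random

N_STATES = 4      # hypothetical SU states (e.g., channel conditions)
N_ACTIONS = 3     # hypothetical candidate handoff channels
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q table: one row per state, one column per handoff action
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def mos_reward(state, action):
    # Placeholder Mean Opinion Score in [1, 5]; a real system would
    # measure it from the delivered QoE on the chosen channel.
    return 1.0 + 4.0 * ((state + action) % N_ACTIONS) / (N_ACTIONS - 1)

def choose_action(state):
    # Epsilon-greedy selection over the Q table
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    row = Q[state]
    return row.index(max(row))

def step(state):
    action = choose_action(state)        # handoff decision at the i-th interruption
    reward = mos_reward(state, action)   # MOS obtained after the move
    next_state = (state + 1) % N_STATES  # SU moves to the next state s_(j,i+1)
    best_next = max(Q[next_state])
    # Standard Q-learning update of the visited (state, action) entry
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    return next_state

random.seed(0)
s = 0
for _ in range(500):
    s = step(s)
```

With a positive MOS reward at every step, repeated updates drive the visited Q entries toward positive values bounded by r_max / (1 - GAMMA) = 5 / 0.1 = 50.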