2015
DOI: 10.1016/j.asoc.2015.09.017
Application of reinforcement learning for security enhancement in cognitive radio networks

Abstract: Cognitive radio networks (CRNs) enable unlicensed users (secondary users, SUs) to sense for and opportunistically operate in underutilized licensed channels, which are owned by the licensed users (primary users, PUs). The CRN has been regarded as the next-generation wireless network…

Cited by 45 publications (23 citation statements) · References 63 publications
“…However, privacy preservation in this technique is still a concern. Lei et al. [97] and Mee et al. [98] studied the use of Reinforcement Learning (RL), which helps attain optimal results for security enhancement by detecting malicious nodes and their attacks. The performance enhancements achieved by the intelligent RL approach include a low probability of false positives and missed detections, a high detection rate, and a utilization gain.…”
Section: Software-defined Radio Securitymentioning
confidence: 99%
“…On the other hand, several results on privacy preservation have been obtained, but these are all theoretical and applied studies that use encryption and homomorphic mapping. [18][19][20][21] Our method attempts to realize SMC through simple secret-computation processing that does not require such complicated cryptographic processing or homomorphic mapping. The aim is to reduce the client's computational complexity while keeping the data secret…”
Section: Q-learning For Secure Multiparty Computationmentioning
confidence: 99%
“…In Miyajima et al. [17,22], learning methods for SMC of BP, fuzzy-system, and VQ methods have been proposed and their validity has been demonstrated. On the other hand, although there are some studies on privacy preservation with RL, [18][19][20][21] they rely on cryptographic algorithms. There do not appear to be any studies based on SMC.…”
Section: Introductionmentioning
confidence: 99%
“…The specific steps for an agent using Q-learning are visible in Figure 2 and defined as: -Determine the current state x_t; -Choose an action a_t, either by exploring (with a probability of ε) or exploiting (with a probability of 1 − ε) the current Q-value; -Receive a reward r_t for having performed the action; -Observe the resulting state x_{t+1}; -Adjust the Q-value using a learning rate α, according to: Q(x_t, a_t) ← Q(x_t, a_t) + α[r_t + γ max_a′ Q(x_{t+1}, a′) − Q(x_t, a_t)]. Choosing an action by exploration just means randomly selecting one of the possible values. Choosing an action by exploitation means selecting the action with the highest Q-value [54].…”
Section: Simple Single Objective Q-learningmentioning
confidence: 99%
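The ε-greedy Q-learning loop quoted above can be sketched in plain Python. This is a minimal illustrative sketch, not code from the cited paper: the tabular agent follows the quoted steps (observe state, explore/exploit, receive reward, update the Q-value with learning rate α and discount γ), while the toy 3-state chain environment, function names, and hyperparameter values are assumptions made here for demonstration.

```python
import random

def epsilon_greedy_q_learning(n_states, n_actions, step, episodes=500,
                              alpha=0.1, gamma=0.9, epsilon=0.1):
    # Tabular Q-learning with an epsilon-greedy action-selection policy.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        x = 0  # x_t: current state (every episode starts at state 0)
        for _ in range(100):  # cap episode length to guarantee termination
            # Explore with probability epsilon, otherwise exploit current Q-values.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[x][i])
            r, x_next, done = step(x, a)  # r_t and x_{t+1} from the environment
            # Update: Q(x,a) += alpha * (r + gamma * max_a' Q(x',a') - Q(x,a));
            # terminal states contribute no bootstrap value.
            best_next = 0.0 if done else max(Q[x_next])
            Q[x][a] += alpha * (r + gamma * best_next - Q[x][a])
            if done:
                break
            x = x_next
    return Q

def chain_step(x, a):
    # Hypothetical toy 3-state chain: action 1 advances one state;
    # reaching state 2 ends the episode with reward 1, action 0 stays put.
    if a == 1:
        nxt = x + 1
        return (1.0, nxt, True) if nxt == 2 else (0.0, nxt, False)
    return 0.0, x, False
```

After training with `Q = epsilon_greedy_q_learning(3, 2, chain_step)`, the learned table favors the advancing action in both non-terminal states, i.e. `Q[0][1] > Q[0][0]` and `Q[1][1] > Q[1][0]`.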
“…In the context of a maximization function, superior means greater. Choosing an action by exploration just means randomly selecting one of the possible values; choosing an action by exploitation means selecting the action with the highest Q-value [54].…”
Section: Pareto Optimizationmentioning
confidence: 99%