2010
DOI: 10.1109/tvt.2010.2043124

Distributed Q-Learning for Aggregated Interference Control in Cognitive Radio Networks

Cited by 203 publications (136 citation statements)
References 12 publications
“…Several solutions can be envisaged to tackle this problem, e.g. by reducing the state space through fuzzy RL algorithms [18], or by distributing the learning process using a cooperative multi-agent RL approach [19].…”
Section: Simulation Results (mentioning)
confidence: 99%
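To make the distributed Q-learning setting of the surveyed paper concrete, below is a minimal sketch assuming independent tabular Q-learners whose rewards penalize exceeding an aggregate-interference cap at a primary receiver. The toy environment, the constants (e.g. AGGREGATE_LIMIT), and the reward shaping are illustrative assumptions, not the paper's actual system model.

```python
import numpy as np

# Sketch: each secondary user runs an independent tabular Q-learner whose
# reward penalizes exceeding an aggregate-interference threshold. All
# constants and the toy environment are assumptions, not from the paper.

rng = np.random.default_rng(0)

N_AGENTS = 4           # secondary users
N_POWER_LEVELS = 5     # discrete transmit-power actions
N_STATES = 3           # coarse interference states (low / near / over)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
AGGREGATE_LIMIT = 8.0  # hypothetical interference cap at the primary receiver

Q = np.zeros((N_AGENTS, N_STATES, N_POWER_LEVELS))

def interference_state(total):
    """Map aggregate interference to a coarse discrete state."""
    if total < 0.5 * AGGREGATE_LIMIT:
        return 0
    return 1 if total < AGGREGATE_LIMIT else 2

state = 0
for step in range(20_000):
    # Each agent picks a power level epsilon-greedily from its own table.
    actions = np.array([
        rng.integers(N_POWER_LEVELS) if rng.random() < EPS
        else int(np.argmax(Q[i, state]))
        for i in range(N_AGENTS)
    ])
    total = actions.sum()                 # toy aggregate interference
    next_state = interference_state(total)
    # Reward: throughput proxy (own power) minus a penalty for violation.
    rewards = actions - (10.0 if total > AGGREGATE_LIMIT else 0.0)
    for i in range(N_AGENTS):
        td = rewards[i] + GAMMA * Q[i, next_state].max() - Q[i, state, actions[i]]
        Q[i, state, actions[i]] += ALPHA * td
    state = next_state

print("Greedy power level per state:\n", Q.argmax(axis=2))
```

Each agent learns from its own action-value table only; coordination emerges, if at all, through the shared penalty signal. This is the same independent-learner structure whose convergence caveats are discussed in the excerpts below.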
“…This approach requires user-specific utility parameters, which cannot always be acquired in many situations [4]. The authors in [14] integrated a Q-learning method with an ANN. However, both RL and ANNs suffer from the same aforementioned limited-generalization problem; moreover, an ANN's slow calculation rate at run time, local minima, and over-fitting are additional limitations.…”
Section: RRM and ICIC Related Work (mentioning)
confidence: 99%
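The Q-learning-plus-ANN integration attributed to [14] can be sketched as semi-gradient TD learning with a small neural approximator in place of a table. The one-hidden-layer network, the toy one-dimensional state, and all hyperparameters below are assumptions for illustration, not the architecture used in [14].

```python
import numpy as np

# Sketch of Q-learning with a small neural-network function approximator.
# The net, the toy 1-D state, and the hyperparameters are illustrative.

rng = np.random.default_rng(1)
N_ACTIONS, HIDDEN, LR, GAMMA, EPS = 2, 16, 1e-2, 0.9, 0.1

W1 = rng.normal(0, 0.5, (HIDDEN, 1)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.5, (N_ACTIONS, HIDDEN)); b2 = np.zeros(N_ACTIONS)

def q_values(s):
    h = np.tanh(W1[:, 0] * s + b1)   # hidden activations
    return W2 @ h + b2, h            # Q(s, .) for all actions

s = 0.0
for step in range(5000):
    q, h = q_values(s)
    a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(q))
    # Toy dynamics: action 1 drifts the state up, action 0 down;
    # reward is highest near s = 0.5.
    s_next = np.clip(s + (0.1 if a == 1 else -0.1) + rng.normal(0, 0.02), 0, 1)
    r = -abs(s_next - 0.5)
    q_next, _ = q_values(s_next)
    td = r + GAMMA * q_next.max() - q[a]
    # Semi-gradient update: backprop the TD error through the chosen head.
    grad_h = W2[a] * (1 - h**2)
    W2[a] += LR * td * h; b2[a] += LR * td
    W1[:, 0] += LR * td * grad_h * s; b1 += LR * td * grad_h
    s = s_next

print("Q(0.5, .) after training:", q_values(0.5)[0])
```

The approximator generalizes across nearby states, which is the appeal over a table; the run-time cost and fitting pathologies the excerpt lists are the corresponding price.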
“…Despite its limitations, the independent Q-learning approach has been widely adopted in the cognitive radio literature. In some cases (e.g., [36]), the issues related to convergence are acknowledged and simulation results are presented to show that the agents achieve an equilibrium. In other cases (e.g., [37]), the question of convergence is not discussed.…”
Section: B. Reinforcement-Learning Techniques (mentioning)
confidence: 99%
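The empirical-equilibrium practice attributed to [36] can be illustrated with a toy experiment: run independent Q-learners in a repeated game and check whether their greedy policies stop changing. The stateless two-player coordination game and the stability criterion below are assumptions made for the demonstration; they show the methodology, not the cited simulations.

```python
import numpy as np

# Two independent Q-learners in a repeated 2x2 coordination game, with an
# empirical check of whether their greedy actions stabilize. The payoff
# matrix and the stability window are assumptions for the demo.

rng = np.random.default_rng(2)
PAYOFF = np.array([[2.0, 0.0],
                   [0.0, 1.0]])   # both players prefer matching on action 0
ALPHA, EPS = 0.05, 0.1
Q = np.zeros((2, 2))              # Q[agent, action]; stateless repeated game
history = []

for t in range(30_000):
    acts = [rng.integers(2) if rng.random() < EPS else int(np.argmax(Q[i]))
            for i in range(2)]
    r = PAYOFF[acts[0], acts[1]]  # symmetric reward for both agents
    for i in range(2):
        Q[i, acts[i]] += ALPHA * (r - Q[i, acts[i]])  # no next state
    history.append(tuple(int(np.argmax(Q[i])) for i in range(2)))

# "Equilibrium reached" here means the greedy joint action stopped changing
# over the last stretch of play: an empirical criterion, not a proof.
tail = history[-5000:]
print("Greedy joint action settled:", len(set(tail)) == 1, tail[-1])
```

This is exactly the distinction the excerpt draws: independent learners carry no general convergence guarantee, so stabilization is verified (or merely assumed) experimentally, case by case.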