2013
DOI: 10.1109/twc.2013.060513.120959
Self-Organization in Small Cell Networks: A Reinforcement Learning Approach


Cited by 187 publications (145 citation statements)
References 28 publications
“…It is capable of learning an unknown environment's statistics as well as of taking actions in the environment so as to maximize the cumulative reward, where the environment itself may be changed by the agent's actions. Reinforcement learning has been widely adopted in communications and networks [37], in control [38], in finance and economics [39], as well as in social science [40]. Specifically, Xiao et al.…”
Section: B. Spontaneous Credibility Equilibrium: Reinforcement Learning
confidence: 99%
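The excerpt above describes reinforcement learning in general terms: an agent estimates an unknown environment's statistics while acting in it to maximize cumulative reward. A minimal tabular Q-learning sketch of that idea follows; the toy environment, state/action counts, and all hyperparameters are illustrative assumptions, not taken from the cited works.

```python
import random

def train_q_learning(n_states, n_actions, step, episodes=200,
                     alpha=0.1, gamma=0.9, eps=0.1, horizon=50, seed=0):
    """Tabular Q-learning: estimate action values from sampled transitions."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # epsilon-greedy: explore occasionally, otherwise act greedily
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s_next, r = step(s, a)
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

# Toy two-state environment: action 1 moves to (and stays in) the
# rewarding state 1; action 0 returns to state 0 with no reward.
def step(s, a):
    if a == 1:
        return 1, (1.0 if s == 1 else 0.0)
    return 0, 0.0
```

After training, `max(range(n_actions), key=lambda a: Q[s][a])` reads off the learned greedy policy, which in this toy chain selects action 1 in both states.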
“…Finally, let us define σ_R = 100 − ρ_k, which is the percentage of users sharing genuine ratings. In such a case, for each σ_R, we can evaluate the corresponding utility of each user by implementing the collaborative filtering algorithm of (37)-(38) and the recommendation quality evaluation of (39). Fig.…”
Section: A. Reward Function Verification, 1) Recommender Systems
confidence: 99%
“…Among other approaches, including those based on reinforcement learning, maximum-entropy reinforcement learning, smoothed best response, or fictitious play, it is important to highlight the contributions in [3], [7], [8], [18]- [23]. The main drawbacks of these contributions can be summarized in five points: (i) The convergence point is a probability distribution over the set of all available channel and power allocation policies [21], [22], [30], [31]. Therefore, the optimization is often on the expectation of the performance metric, and optimality is often claimed only in the asymptotic regime.…”
Section: A. State of the Art
confidence: 99%
“…They proposed a Markov approximation framework to study the convergence in probability. In [2], the authors studied a reinforcement-learning-based framework for interference management in small cell networks and proposed self-organizing strategies for interference management in closed-access small cell networks with the minimum information required to learn an equilibrium. In [3], the authors employed a semi-Markov decision process to study the admission control problem and designed a power control game to reduce energy consumption.…”
Section: Introduction
confidence: 99%
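The excerpt above mentions self-organizing, learning-based interference management with minimal information exchange. One way to picture this is stateless (bandit-style) learning in which each small cell privately averages the payoff of each transmit-power level and greedily favors the best one. The two-cell utility model, power levels, channel gain, and learning parameters below are hypothetical choices for illustration only, not the schemes of [2] or [3].

```python
import math
import random

def utility(p_self, p_other, gain=0.5, noise=1.0, cost=0.5):
    # Hypothetical SINR-style rate minus a linear energy cost.
    rate = math.log2(1.0 + p_self / (gain * p_other + noise))
    return rate - cost * p_self

def learn_powers(levels=(1.0, 4.0), rounds=2000, eps=0.1, alpha=0.05, seed=0):
    """Two cells independently learn average payoffs of each power level."""
    rng = random.Random(seed)
    Q = [[0.0] * len(levels) for _ in range(2)]  # one value table per cell
    for _ in range(rounds):
        acts = []
        for i in range(2):
            # epsilon-greedy action selection per cell, no coordination
            if rng.random() < eps:
                acts.append(rng.randrange(len(levels)))
            else:
                acts.append(max(range(len(levels)), key=lambda a: Q[i][a]))
        for i in range(2):
            r = utility(levels[acts[i]], levels[acts[1 - i]])
            # running-average update of the chosen action's payoff estimate
            Q[i][acts[i]] += alpha * (r - Q[i][acts[i]])
    return Q
```

With this particular cost, the low power level dominates for both cells, so each learner settles on it independently; changing `cost` or `gain` shifts the learned operating point.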