2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC)
DOI: 10.1109/iwcmc.2018.8450520
Channel Assignment for D2D Communication: A Regret Matching Based Approach

Cited by 9 publications (3 citation statements). References 27 publications.
“…Distributed channel assignment among U‑D2D pairs is formulated as a non‑cooperative game, and a regret‑matching learning algorithm is used to find the optimal RB allocation [20]. A Q‑learning‑based distributed channel and synchronous power allocation algorithm is proposed for a self‑organised femtocell network in [21].…”
Section: Related Work
confidence: 99%
“…Finally, the last few years have seen the emergence of machine learning based approaches for resource allocation and interference mitigation in D2D enabled networks (e.g., Reference [14] and the references therein). However, in the literature [15,16,17,18,19,20,21,22], we observe that light-weight, time critical on-line mechanisms for adapting resource allocation and enhancing ESE are not available. Furthermore, such works typically use centralized approaches that do not exploit the OSA resources available in this type of scenario; when the approach is partly distributed, as in Reference [23], Q-learning is exploited just for the system throughput.…”
Section: Related Work
confidence: 94%
“…It is used by many research papers, which show that regret-matching learning achieves higher system throughput than random allocation and converges rapidly to a correlated equilibrium of the system. In addition, it allows each player to consider its neighbours' actions and to adapt to the environment [16,21-23].…”
Section: No-Regret Learning Based on Matching Game
confidence: 99%
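The regret-matching dynamic referenced in these citation statements can be illustrated with a minimal sketch. The toy congestion-style utility (a pair's payoff shrinks as more pairs share its channel), the parameter values, and all function names below are illustrative assumptions, not the paper's actual system model; the update rule itself is the standard Hart–Mas-Colell procedure: play each action with probability proportional to its positive cumulative regret, then add the counterfactual-minus-realized payoff to each action's regret.

```python
import random


def run_regret_matching(num_pairs=4, num_channels=4, rounds=2000, seed=1):
    """Toy regret-matching channel selection for D2D pairs (sketch).

    Assumed utility: a pair on channel c earns 1 / (1 + #other pairs on c),
    so interference-free (orthogonal) assignments maximize every payoff.
    """
    rng = random.Random(seed)
    # Cumulative regret of each pair for each channel.
    regret = [[0.0] * num_channels for _ in range(num_pairs)]

    def sample_action(r):
        # Probability of an action is proportional to its positive regret;
        # with no positive regret, pick a channel uniformly at random.
        pos = [max(v, 0.0) for v in r]
        total = sum(pos)
        if total == 0.0:
            return rng.randrange(len(r))
        x = rng.uniform(0.0, total)
        for action, p in enumerate(pos):
            x -= p
            if x <= 0.0:
                return action
        return len(r) - 1

    def payoff(i, action, choices):
        # Payoff if pair i used `action` while the others keep their choices.
        others = sum(1 for j, c in enumerate(choices) if j != i and c == action)
        return 1.0 / (1.0 + others)

    choices = [rng.randrange(num_channels) for _ in range(num_pairs)]
    for _ in range(rounds):
        choices = [sample_action(regret[i]) for i in range(num_pairs)]
        for i in range(num_pairs):
            realized = payoff(i, choices[i], choices)
            for a in range(num_channels):
                # Regret = counterfactual payoff of a minus realized payoff.
                regret[i][a] += payoff(i, a, choices) - realized

    return choices


final = run_regret_matching()
```

With as many channels as pairs, the empirical play distribution converges to the set of correlated equilibria of this game, whose pure equilibria are the orthogonal (one-pair-per-channel) assignments; each pair only needs its own realized and counterfactual payoffs, which is what makes the scheme distributed.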