2018
DOI: 10.1109/mnet.2018.1800085
A Deep-Learning-Based Radio Resource Assignment Technique for 5G Ultra Dense Networks

Cited by 153 publications (71 citation statements) · References 14 publications
“…Recently, deep learning has also been introduced into resource allocation problems. [28] leverages the deep long short-term memory (LSTM) learning technique to make localized predictions of the traffic load at ultra-dense network (UDN) base stations. In [29], a damped three-dimensional (D3D) message-passing algorithm (MPA) based on deep learning has been proposed for resource allocation in cognitive radio networks.…”
Section: arXiv:1912.09302v1 [cs.NI] 18 Dec 2019
confidence: 99%
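The LSTM-based traffic-load prediction attributed to [28] above can be illustrated with a minimal sketch. The cell below is a generic single-layer LSTM forward pass in NumPy with a linear readout; all shapes, weights, and the toy load history are invented for illustration and do not reproduce the paper's actual model or data.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias; gate order: input, forget, candidate, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    g = np.tanh(z[2*H:3*H])      # candidate cell state
    o = sigmoid(z[3*H:4*H])      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def predict_load(series, W, U, b, w_out):
    """Run the LSTM over a window of past loads; a linear readout of the
    final hidden state gives the predicted next load."""
    H = b.shape[0] // 4
    h, c = np.zeros(H), np.zeros(H)
    for x_t in series:
        h, c = lstm_step(np.array([x_t]), h, c, W, U, b)
    return float(w_out @ h)

rng = np.random.default_rng(0)
D, H = 1, 8                      # scalar load input, 8 hidden units (assumed)
W = rng.normal(0, 0.1, (4*H, D))
U = rng.normal(0, 0.1, (4*H, H))
b = np.zeros(4*H)
w_out = rng.normal(0, 0.1, H)

past_load = [0.2, 0.4, 0.5, 0.7, 0.6]  # normalized load history (made up)
pred = predict_load(past_load, W, U, b, w_out)
print(pred)
```

In a trained model the weights would of course be learned from measured base-station traffic rather than drawn at random; the sketch only shows the recurrence that makes such localized, per-station prediction possible.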
“…Then forward propagation is performed to obtain the corresponding network output and the cost function. Afterwards, the partial derivatives are calculated according to (17), (18), (19) and (20) to adjust the weights. The training process is terminated when the error falls to an acceptable range (Algorithm 1: hybrid precoding algorithm based on a BP neural network).…”
Section: Algorithm Summary
confidence: 99%
“…Calculate the gradient ∇_{w^(l)_{nm}} e² according to (16), (17), (18), (19) and (20); 7: Perform the back propagation via SGD and update the weights according to (12), (13) and (14); 8: Calculate the error on the test set. If the error is smaller than the threshold, skip to step 10; 9: end while 10: return the optimized hybrid precoding neural network F.…”
Section: Algorithm Summary
confidence: 99%
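The loop structure quoted above (forward pass, gradient of the squared-error cost, SGD weight update, stop once the test error drops below a threshold) can be sketched on a toy problem. Everything here, the linear model, the data, the learning rate and the threshold, is invented for illustration; it does not reproduce the cited hybrid precoding network or its equations (12)-(20).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task standing in for the precoding mapping (data made up).
X_train = rng.normal(size=(64, 3))
w_true = np.array([0.5, -1.0, 2.0])
y_train = X_train @ w_true
X_test = rng.normal(size=(16, 3))
y_test = X_test @ w_true

w = rng.normal(0, 0.1, 3)        # random weight initialization
lr, threshold = 0.1, 1e-4

for epoch in range(500):
    # Forward propagation: network output and squared-error cost.
    e = X_train @ w - y_train
    # Partial derivatives of the cost with respect to the weights.
    grad = 2.0 * X_train.T @ e / len(e)
    # Back propagation via a (full-batch) SGD weight update.
    w -= lr * grad
    # Terminate once the test-set error falls below the threshold.
    test_err = float(np.mean((X_test @ w - y_test) ** 2))
    if test_err < threshold:
        break

print(epoch, test_err)
```

The early-stopping check against a held-out test set mirrors step 8 of the quoted Algorithm 1; in the paper this wraps a multi-layer BP network rather than the single linear layer used here.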
“…However, most existing applications of deep neural networks (DNNs) treat them as black boxes trained by exploiting the large amounts of data available [25][26][27]. Hence, this data-driven deep learning leaves the neural network topology lacking theoretical understanding and explanation.…”
confidence: 99%
“…values in the range [0, 1], p^{n(0)}_{m,k} as random values under the power limit P^max_m. The non-linear transforms of the neurons in each layer are defined by (23), (24) and (25). Random initialization: in the current deep learning literature, the weights in deep neural networks are generally initialized with random values. In this way, h^n_{m,k}, σ²_{m,k} and A are initialized as random values following a Gaussian distribution with zero mean and a well-chosen variance level.…”
confidence: 99%
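The zero-mean Gaussian weight initialization mentioned in the quote can be sketched as follows. Scaling the variance by 1/fan_in is one common interpretation of a "well-chosen variance level" (it keeps pre-activation variance roughly constant across layers); the layer sizes and this particular scale are assumptions, not taken from the cited paper.

```python
import numpy as np

def gaussian_init(fan_in, fan_out, rng):
    """Zero-mean Gaussian weights with variance 1/fan_in, so that the
    variance of a layer's pre-activations roughly matches its input."""
    std = 1.0 / np.sqrt(fan_in)
    return rng.normal(0.0, std, size=(fan_out, fan_in))

rng = np.random.default_rng(42)
layer_sizes = [16, 32, 8]        # example topology (made up)
weights = [gaussian_init(d_in, d_out, rng)
           for d_in, d_out in zip(layer_sizes[:-1], layer_sizes[1:])]

# Propagate a random input through the linear parts only, to check scale.
x = rng.normal(size=16)
for W in weights:
    x = W @ x
print([w.shape for w in weights], float(np.std(x)))
```

With this scaling the signal neither explodes nor vanishes through the stack, which is the usual motivation for choosing the variance carefully rather than using an arbitrary Gaussian.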