2018
DOI: 10.1109/tsp.2018.2866382
Learning to Optimize: Training Deep Neural Networks for Interference Management


Cited by 725 publications (607 citation statements)
References 16 publications
Citation types: 1 supporting, 606 mentioning, 0 contrasting
“…II illustrates MLP and CNN based approaches for power control. From the numerical experiments in [1], [3], we observe a performance loss when K gets larger. For example, in [1], the performance gap to the WMMSE algorithm is 3% when K = 10 and becomes 12% when K = 30.…”
Section: B. Existing Approaches' Limitations (mentioning)
Confidence: 87%
“…From the numerical experiments in [1], [3], we observe a performance loss when K gets larger. For example, in [1], the performance gap to the WMMSE algorithm is 3% when K = 10 and becomes 12% when K = 30. From the perspective of approximation theory, an MLP with a sufficient number of parameters can learn anything if we have sufficient training samples [15].…”
Section: B. Existing Approaches' Limitations (mentioning)
Confidence: 87%
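
The statements above refer to the MLP-based "learn to imitate WMMSE" approach of [1]: a fully connected network is trained to reproduce the WMMSE power-control solution. The sketch below is for illustration only, assuming K transceiver pairs, a flattened K x K channel-gain matrix as input, and K normalized transmit powers as output; the framework (PyTorch) and layer widths are assumptions, not the architecture reported in [1].

import torch
import torch.nn as nn

K = 10  # number of transceiver pairs (assumed; [1] reports results for K = 10 and K = 30)

# Fully connected network mapping the flattened K x K channel-gain matrix
# to K transmit powers in [0, 1]; layer widths are illustrative only.
power_net = nn.Sequential(
    nn.Linear(K * K, 200), nn.ReLU(),
    nn.Linear(200, 80), nn.ReLU(),
    nn.Linear(80, K), nn.Sigmoid(),  # powers normalized to [0, 1]; scale by P_max outside the network
)

# Forward pass on a batch of channel realizations (random stand-in data).
H = torch.rand(32, K * K)
p = power_net(H)  # shape (32, K): one power level per transmitter
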
“…The cumulative distribution function (CDF) for P sum in Fig. 9 is obtained over 5000 testing data sets [29]. It is observed that the total transmit power of the IoTDs obtained from the trained DNN is very close to that obtained from the proposed joint optimization while significantly outperforming the disjoint optimizations.…”
Section: Deep Neural Network Evaluation (mentioning)
Confidence: 77%
“…Accordingly, in this section, we present a supervised deep learning method using the DNN to approximate the proposed Algorithm 2, such that passing the input operating parameters of Algorithm 2 through a trained DNN gives a feasible output for the resource allocation for the C-RAN network with much reduced execution time. Furthermore, training the DNN is fairly convenient, as the training samples can easily be obtained by running Algorithm 2 offline [29]. Next, we describe the DNN architecture used in our work, as shown in Fig.…”
Section: Low-Complexity Implementation for the Joint Optimization (mentioning)
Confidence: 99%
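
The passage above describes the usual supervised learning-to-optimize recipe: run the optimization algorithm offline to label a set of inputs, then fit a DNN to the resulting input/output pairs so that a single forward pass replaces the slow solver at run time. The sketch below illustrates that pipeline under stated assumptions; run_algorithm_2 is a hypothetical stub standing in for the citing paper's Algorithm 2, and the dimensions, network, and MSE loss are illustrative, not taken from [29].

import torch
import torch.nn as nn

def run_algorithm_2(x):
    # Hypothetical stub for the offline optimizer (Algorithm 2 in the citing paper);
    # here it simply returns a dummy feasible allocation of the same size.
    return torch.clamp(x, 0.0, 1.0)

# 1) Generate labels offline by running the optimizer on sampled operating parameters.
inputs = torch.rand(5000, 20)                        # input dimension assumed
labels = torch.stack([run_algorithm_2(x) for x in inputs])

# 2) Train a DNN to approximate the optimizer's input -> output mapping.
net = nn.Sequential(nn.Linear(20, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 20))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(net(inputs), labels)
    loss.backward()
    opt.step()

# 3) At run time, a single forward pass replaces the slow optimizer.
allocation = net(torch.rand(1, 20))
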
“…To reduce the on-line computational complexity, the idea of "learning to optimize" is proposed for solving variable optimization in [7]. Most recently, a novel framework of using deep learning to find the solution of constrained functional optimization is proposed in [8].…”
Section: State-of-the-Arts (mentioning)
Confidence: 99%