Learning to Beamform for Minimum Outage
2018 | DOI: 10.1109/tsp.2018.2865408

Cited by 36 publications (41 citation statements). References 46 publications.
“…Beamforming for minimum outage [63] has also been proven to be NP-hard even when the channel distribution is known exactly, and in fact no practically good approximation algorithm was known until very recently. Yet, relying on a sample average 'counting' approximation of outage, simple smoothing, and stochastic gradient updates, a lightweight and very effective algorithm was recently designed in [64] that performs remarkably well, using only recent channel data. The problem is formulated as follows:…”
Section: Machine Learning Based Resource Allocation
confidence: 99%
“…The final step is to construct a smooth approximation of f (w; h), and optimize the resulting function using stochastic gradient descent. As shown in [64], this approach works unexpectedly well on a problem that has challenged many disciplined-optimization experts for years. Finally, the sum-rate optimal power control problem is known to be NP-hard, but we have good, albeit computationally expensive, approximation schemes at our disposal.…”
Section: Machine Learning Based Resource Allocation
confidence: 99%
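The recipe described in the excerpts above (a sample-average 'counting' estimate of outage, sigmoid smoothing of the outage indicator, and per-sample stochastic gradient updates) can be sketched for a toy single-user MISO case. Everything below, including the correlated channel model, the threshold `gamma`, the temperature `tau`, and the step size, is a hypothetical illustration, not the actual algorithm or setup of [64]:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4        # transmit antennas
gamma = 1.0  # SNR threshold: the outage event is |h^H w|^2 < gamma
tau = 0.1    # smoothing temperature of the sigmoid surrogate
step = 0.05  # stochastic gradient step size

# Hypothetical "recent channel data": correlated Rayleigh samples with a
# dominant direction d, so the beamformer choice actually affects outage.
d = np.ones(N, dtype=complex) / np.sqrt(N)
c = (rng.standard_normal(5000) + 1j * rng.standard_normal(5000)) / np.sqrt(2)
Z = (rng.standard_normal((5000, N)) + 1j * rng.standard_normal((5000, N))) / np.sqrt(2)
H = 2.0 * c[:, None] * d + 0.5 * Z   # channel covariance 4*d*d^H + 0.25*I

def empirical_outage(w):
    """Sample-average 'counting' estimate of the outage probability."""
    return np.mean(np.abs(H.conj() @ w) ** 2 < gamma)

w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w /= np.linalg.norm(w)               # unit-power initialization
out0 = empirical_outage(w)

for h in H:                          # one stochastic gradient update per sample
    g = np.vdot(h, w)                # h^H w  (vdot conjugates its first argument)
    snr = np.abs(g) ** 2
    x = np.clip((snr - gamma) / tau, -60.0, 60.0)
    s = 1.0 / (1.0 + np.exp(x))      # sigmoid((gamma - snr)/tau): smooth indicator
    grad = -(s * (1.0 - s) / tau) * g * h   # Wirtinger gradient of the surrogate
    w = w - step * grad
    w /= np.linalg.norm(w)           # project back onto the unit-power constraint

out1 = empirical_outage(w)
```

The correlated channel model matters for the toy to be meaningful: with isotropic Rayleigh fading, every unit-norm beamformer yields the same single-user outage probability, so there would be nothing for the stochastic gradient updates to improve.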
“…However, few works focus on learning approaches to optimizing the beamforming design in multi-antenna communications, with the exception of [42], [43], [44], [45], [46], [47]. The difficulty is partly due to the large number of complex variables in the beamforming matrix that need to be optimized.…”
Section: Introduction
confidence: 99%
“…The difficulty is partly due to the large number of complex variables in the beamforming matrix that need to be optimized. An outage-based approach to transmit beamforming was studied in [42] to deal with channel uncertainty at the BS; however, only a single user was considered. The work in [43] designed a decentralized robust precoding scheme based on a deep neural network (DNN).…”
Section: Introduction
confidence: 99%
“…Recently, deep learning (DL) has shown great potential for improving performance in communication systems. Many attempts have been made to apply DL to the physical layer [5], to resource allocation such as power control [6], [7], and to beamforming [8]- [10]. For instance, [6] applied a fully connected deep neural network (DNN) to approximate the weighted minimum mean square error (WMMSE) power allocation algorithm.…”
Section: Introduction
confidence: 99%
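The pattern described in [6], training a fully connected network to imitate the output of an optimization algorithm, can be sketched with a self-contained toy. Implementing WMMSE itself would be lengthy, so a simple proxy allocation rule stands in as the 'teacher' here; the network architecture, sizes, and learning rate are all hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 3  # transmitter-receiver pairs

# Toy "teacher": in [6] this would be the WMMSE algorithm; here a simple
# proxy rule (favor links with a strong direct gain) keeps the sketch short.
def teacher(G):
    q = np.diagonal(G, axis1=1, axis2=2) / (G.sum(axis=(1, 2))[:, None] + 1e-9)
    return q / q.max(axis=1, keepdims=True)   # powers normalized into (0, 1]

G = rng.exponential(size=(2000, K, K))        # i.i.d. channel gains
X = G.reshape(len(G), -1)                     # flatten the K x K gains as input
Y = teacher(G)                                # target power allocations

# One-hidden-layer fully connected network, trained by plain gradient descent
# on the mean squared error between its output and the teacher's allocation.
W1 = rng.standard_normal((K * K, 32)) * 0.1; b1 = np.zeros(32)
W2 = rng.standard_normal((32, K)) * 0.1;     b2 = np.zeros(K)

H1 = np.maximum(X @ W1 + b1, 0.0)
P = 1.0 / (1.0 + np.exp(-(H1 @ W2 + b2)))
mse0 = np.mean((P - Y) ** 2)                  # loss before training

lr = 0.01
for _ in range(200):
    H1 = np.maximum(X @ W1 + b1, 0.0)         # ReLU hidden layer
    P = 1.0 / (1.0 + np.exp(-(H1 @ W2 + b2))) # sigmoid keeps powers in (0, 1)
    dZ2 = (P - Y) * P * (1.0 - P) / len(X)    # backprop through MSE + sigmoid
    dW2 = H1.T @ dZ2; db2 = dZ2.sum(0)
    dH1 = dZ2 @ W2.T
    dH1[H1 <= 0] = 0.0                        # backprop through ReLU
    dW1 = X.T @ dH1; db1 = dH1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = np.mean((P - Y) ** 2)                   # loss after training
```

The point of the sketch is the supervised "learning to optimize" structure: once trained, a single forward pass replaces each (expensive) run of the iterative teacher algorithm.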