2021
DOI: 10.48550/arxiv.2107.01489
Preprint

Learning Decentralized Wireless Resource Allocations with Graph Neural Networks

Zhiyang Wang, Mark Eisen, Alejandro Ribeiro

Abstract: We consider the broad class of decentralized optimal resource allocation problems in wireless networks, which can be formulated as constrained statistical learning problems with a localized information structure. We develop the use of Aggregation Graph Neural Networks (Agg-GNNs), which process a sequence of delayed and potentially asynchronous graph-aggregated state information obtained locally at each transmitter from multi-hop neighbors. We further utilize model-free primal-dual learning methods to optimiz…
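
The core mechanism named in the abstract, the graph aggregation sequence, can be sketched in a few lines. The NumPy fragment below builds, at every transmitter i, the sequence [x_i, (Sx)_i, (S^2 x)_i, ...] through repeated exchanges with the graph shift operator S; each extra term costs one more hop and one more time step, which is where the delayed, local information structure comes from. The sizes, the random S, and the linear readout standing in for the learned Agg-GNN layers are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tx, K = 8, 4                 # transmitters and aggregation depth (hypothetical sizes)
S = rng.random((n_tx, n_tx))   # graph shift operator, e.g. instantaneous channel gains
x = rng.random(n_tx)           # local network state observed at each transmitter

# Aggregation sequence Y[i] = [x_i, (Sx)_i, (S^2 x)_i, ...]:
# entry k carries k-hop (and k-step delayed) neighborhood information,
# collected purely through exchanges with one-hop neighbors.
Y = np.empty((n_tx, K))
z = x.copy()
for k in range(K):
    Y[:, k] = z
    z = S @ z                  # one more hop of delayed aggregation

# A shared local readout maps each transmitter's sequence to its allocation
# (a stand-in for the learned Agg-GNN layers, not the trained policy itself).
w = rng.random(K)
p = np.maximum(Y @ w, 0.0)     # decentralized resource allocation per transmitter
print(p)
```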

Cited by 2 publications (2 citation statements)
References 22 publications
“…Specifically, ML has been a main player in 5G and beyond communication systems. It has been adopted to provide modern communication systems with a plethora of intelligent services and functions, such as intelligent resource allocation [7], [8], link adaptation [9], [10], beamforming [11],…”
Section: Introduction (mentioning, confidence: 99%)
“…Another approach, however, consists of training a DNN whose cost function is the weighted sum rate (WSR) of the users in the network, in an unsupervised manner [6,7,9]. Moreover, model-free and distributed extensions have also been introduced in the literature [10,11,12,13,14,15].…”
Section: Introduction (mentioning, confidence: 99%)
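
The unsupervised training scheme described in this statement can be made concrete: the negative weighted sum rate itself is the loss, so gradient descent needs only sampled channel realizations and no labeled allocations. The PyTorch sketch below assumes a convention where H[j, i] is the gain from transmitter j to receiver i; the network size n, the two-layer net, and the noise power sigma2 are hypothetical placeholders rather than details from the cited works.

```python
import torch

n = 6                                   # hypothetical number of transmitter-receiver pairs
w = torch.ones(n)                       # per-user rate weights
net = torch.nn.Sequential(              # toy DNN mapping channel state to powers
    torch.nn.Linear(n * n, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, n),
    torch.nn.Sigmoid(),                 # keeps powers in [0, 1]
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def neg_wsr(p, H, w, sigma2=1e-2):
    # SINR_i = H_ii p_i / (sigma2 + sum_{j != i} H_ji p_j); loss is -WSR.
    signal = torch.diagonal(H) * p
    interference = H.t() @ p - signal
    return -(w * torch.log2(1.0 + signal / (sigma2 + interference))).sum()

for step in range(500):
    H = torch.rand(n, n)                # sample a channel realization
    p = net(H.reshape(-1))              # allocate power from the full channel state
    loss = neg_wsr(p, H, w)             # unsupervised: the objective is the label
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the loss is a differentiable function of the sampled channel and the network output, every realization of H yields a gradient step without any ground-truth allocation, which is what makes the training unsupervised.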