2021
DOI: 10.48550/arxiv.2108.13178
Preprint
Modular Meta-Learning for Power Control via Random Edge Graph Neural Networks

Abstract: In this paper, we consider the problem of power control for a wireless network with an arbitrarily time-varying topology, including the possible addition or removal of nodes. A data-driven design methodology that leverages graph neural networks (GNNs) is adopted in order to efficiently parametrize the power control policy mapping the channel state information (CSI) to transmit powers. The specific GNN architecture, known as random edge GNN (REGNN), defines a non-linear graph convolutional filter whose spatial …
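The abstract describes a graph convolutional filter over the network topology that maps CSI to transmit powers. As a rough illustration (not the paper's implementation), a single REGNN-style layer can be sketched as a polynomial graph filter in which the channel-gain matrix plays the role of the graph shift operator; the filter taps `w` and the single-layer structure here are simplifying assumptions:

```python
import numpy as np

def regnn_layer(H, x, w):
    """One illustrative graph-filter layer: ReLU(sum_k w[k] * H^k @ x).

    H : (n, n) channel-gain matrix, used as the graph shift operator
    x : (n,) per-node input features (e.g., derived from CSI)
    w : (K,) filter taps shared across all nodes
    """
    z = np.zeros_like(x)
    s = x.copy()               # s holds H^k @ x as k grows
    for w_k in w:
        z += w_k * s           # accumulate the k-th filter term
        s = H @ s              # advance to the next power of H
    return np.maximum(z, 0.0)  # pointwise nonlinearity

# Tiny usage example with a random 4-node network
rng = np.random.default_rng(0)
n = 4
H = np.abs(rng.normal(size=(n, n)))  # nonnegative channel gains
x = np.ones(n)
p = regnn_layer(H, x, np.array([0.5, 0.1, 0.01]))
```

Because the taps are shared and the filter only interacts with the topology through `H`, the same layer applies unchanged when nodes are added or removed, which is the property the abstract highlights for time-varying topologies.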

Cited by 4 publications (5 citation statements)
References 33 publications
“…With the success of machine learning, and particularly deep learning, over the past few years, learning-based algorithms have emerged to solve challenging problems in wireless communications, including resource management [13]. As a prominent example, for the class of power allocation problems, several approaches have been proposed that leverage techniques based on supervised, unsupervised, self-supervised, and reinforcement learning, as well as meta-learning and graph representation learning [14]–[29].…”
Section: Introduction
confidence: 99%
“…This is, however, not the case for conventional learning in slow-varying environments, for which the features tend to be too correlated, resulting in overfitting. As a final note, although absolute NMSE values close to 1 may be insufficient for use in applications such as precoding, they can provide useful information for other applications such as proactive resource allocation [40, 64].…”
Section: Methods
confidence: 99%
“…Previous applications of transfer learning to communication systems include beamforming for the multi-user, multiple-input, single-output (MISO) downlink [25] and for intelligent reflecting surface (IRS)-assisted MISO downlink [26], as well as downlink channel prediction [27, 28] (see also [25, 27]). Meta-learning has been applied to communication systems for demodulation [29, 30, 31, 32], decoding [33], end-to-end design of encoding and decoding with and without a channel model [34, 35], MIMO detection [36], beamforming for multiuser MISO downlink systems [37], layered division multiplexing for ultra-reliable communications [38], UAV trajectory design [39], and resource allocation [40].…”
Section: Introduction
confidence: 99%
“…It is clear that as the network size grows, generating high-quality labeled samples becomes exponentially more costly. This computational complexity has led to alternatives to supervised learning for training deep learning models in RRM problems, including unsupervised learning, reinforcement learning, self-supervised learning, and meta-learning, which do not necessarily rely on (extensive) labeling of the data for training the underlying neural networks [3–5, 9–16]. However, except for a few recent studies, such as [3, 8, 11], little effort has been made to thoroughly compare the performance of models trained using supervised and unsupervised learning procedures.…”
Section: Introduction
confidence: 99%