2019
DOI: 10.48550/arxiv.1906.02871
Preprint

Graph Embedding based Wireless Link Scheduling with Few Training Samples

Abstract: Link scheduling in device-to-device (D2D) networks is usually formulated as a non-convex combinatorial problem, which is generally NP-hard, making the optimal solution difficult to obtain. Traditional methods for this problem are mainly based on mathematical optimization techniques, which require accurate channel state information (CSI), usually obtained through channel estimation and feedback. To overcome the high computational complexity of the traditional methods and eliminate the costly channel estimation …

Cited by 14 publications (34 citation statements)
References 20 publications
“…However, as the resource management problem is often non-convex, the neural network training problem is also non-convex, and as such the optimization landscape becomes highly complicated. Due to this difficulty of optimization, unsupervised training does not always outperform the supervised one (see the comparison of Tables XIII and XIV in [37]). A very recent paper also conducts a comprehensive theoretical comparison between supervised and unsupervised models [38].…”
Section: B. Unsupervised Learning
confidence: 99%
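The supervised/unsupervised contrast in the statement above can be made concrete. Below is a minimal NumPy sketch (the channel model, noise level, and function names are illustrative assumptions, not taken from the cited works): supervised training regresses onto power allocations labeled by a conventional optimizer, while unsupervised training uses the negative sum-rate objective itself as the loss.

```python
import numpy as np

def sum_rate(p, H, noise=1.0):
    """Sum of log2(1 + SINR_k) over K links; H[i, j] is the channel
    power gain from transmitter j to receiver i (assumed model)."""
    signal = np.diag(H) * p            # desired-link received power
    interference = H @ p - signal      # total interference at each receiver
    return np.sum(np.log2(1.0 + signal / (interference + noise)))

def supervised_loss(p_pred, p_label):
    # Regress onto labels produced by a conventional optimizer.
    return np.mean((p_pred - p_label) ** 2)

def unsupervised_loss(p_pred, H):
    # No labels needed: the negative system objective is the loss,
    # which inherits the non-convexity discussed above.
    return -sum_rate(p_pred, H)
```

For a single interference-free link at unit power, `sum_rate` reduces to log2(1 + 1/noise), so the two losses can be sanity-checked analytically.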
“…Resource allocation problems in wireless systems are usually addressed through optimization methods. Because of their non-convex nature, standard centralized resource allocation policies are obtained through heuristic optimization methods [1][2][3] or data-driven machine learning methods [4][5][6][7][8][9]. The latter is seeing growing interest due to its applicability in a wide range of scenarios and its lack of reliance on explicit or accurate system modeling.…”
Section: Introduction
confidence: 99%
“…In this paper we address the asynchronous decentralized wireless resource allocation problem with a novel unsupervised learning policy-based approach. By considering the interference patterns between transmitting devices as a graph [7][8][9], we capture the asynchrony patterns via the activation of the graph edges on a highly granular time scale. From this graph representation of interference and asynchrony, we implement a decentralized learning architecture as the Aggregation Graph Neural Networks (Agg-GNNs) [19].…”
Section: Introduction
confidence: 99%
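The graph view of interference described above can be sketched as follows. This is an assumed construction for illustration (the threshold rule and function name are not from the cited works, which build richer graph representations): each D2D pair becomes a node, and two nodes are connected when their cross-link interference is strong.

```python
import numpy as np

def interference_graph(H, threshold=0.1):
    """H[i, j]: channel power gain from transmitter j to receiver i.
    Returns a symmetric 0/1 adjacency matrix over the K links."""
    cross = np.maximum(H, H.T)          # stronger direction of mutual interference
    A = (cross > threshold).astype(int)
    np.fill_diagonal(A, 0)              # a link does not interfere with itself
    return A
```

A GNN-based scheduler would then aggregate neighbor features along the edges of `A` (and, in the asynchronous setting above, activate edges per time slot).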
“…Inspired by the success of machine learning (ML) in many fields, the wireless communications community has recently turned to ML to obtain more efficient methods for resource allocation problems, such as power allocation [13]-[17], link scheduling [18], [19], and user association [20]. All of the aforementioned works can be classified into three different learning paradigms.…”
Section: Introduction
confidence: 99%
“…For this learning paradigm, the input/output relation of a given resource optimization problem is regarded as a black box, which is learned directly by ML techniques, especially deep neural networks (DNNs). However, this paradigm is effective only for resource allocation with a single kind of output variable [13], [18], [19] and does not work well for MINLP problems. The second is the reinforcement learning paradigm.…”
Section: Introduction
confidence: 99%
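The "black box" paradigm above can be illustrated with a toy forward pass: a small MLP maps flattened CSI to a single kind of continuous output (per-link powers in [0, 1]), which is the setting where the paradigm is said to work; an MINLP would additionally demand discrete outputs. All names, sizes, and the random placeholder weights here are assumptions for illustration.

```python
import numpy as np

def mlp_power_policy(H, W1, W2):
    """Map flattened CSI to per-link transmit powers in [0, 1]."""
    x = H.ravel()                              # CSI matrix as a flat input vector
    h = np.maximum(W1 @ x, 0.0)                # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))     # sigmoid keeps outputs in [0, 1]

rng = np.random.default_rng(1)
K = 4                                          # number of D2D links (assumed)
W1 = rng.normal(size=(16, K * K))              # untrained placeholder weights
W2 = rng.normal(size=(K, 16))
p = mlp_power_policy(rng.exponential(size=(K, K)), W1, W2)
```

In practice the weights would be trained against labels from a conventional optimizer, which is precisely the supervised setup discussed in the earlier statements.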