2019
DOI: 10.1109/mwc.2019.1800447

Model-Driven Deep Learning for Physical Layer Communications

Abstract: Intelligent communication is gradually becoming a mainstream direction. As a major branch of machine learning, deep learning (DL) has been applied to physical layer communications and has demonstrated impressive performance improvements in recent years. However, most existing DL-related works focus on data-driven approaches, which treat the communication system as a black box and train it on a huge volume of data. Training such a network requires sufficient computing resources and extensive time, b…

Cited by 403 publications (225 citation statements) · References 20 publications
“…As shown in Fig. 4, K_B red dash-dotted lines point to the next position of parameter Ω, which implies that K_B trained source-task-specific parameters … [the excerpt then runs into the paper's algorithm listing; reconstructed:]
    for k = 1, …, K_B do
 9:   Initialize the parameter Ω_k ← Ω
10:   for g = 1, …, G_Tr do
11:     Obtain the truncated gradient υ_{S,k} using Eq. (18)
12:     Update the parameter Ω_{S,k} with gradient descent: Ω_{S,k} ← Ω_{S,k} − β·υ_{S,k}
 …    Update the parameter Ω with the unbiased 1st- and 2nd-moment vectors: Ω ← Ω − γ·μ̂₃/(√ν̂₃ + ε)
19: end
20: Meta-trained network parameter: Ω_Mt ← Ω
21: Meta-adaption and Testing
22: Initialize NMSE: NMSE_Mt ← 0
23: for k = 1, …, K_T do
24:   Generate the datasets D_Ad(k) and D_Te(k) for T_T(k)
25:   Meta-adaption stage
26:   Load the network parameter Ω_{T,k} ← Ω_Mt
27:   for g = 1, …, G_Ad do
28:     Obtain the truncated gradient υ_{T,k} using Eq. …”
Section: Meta-adaption and Testing Stages · Mentioning confidence: 99%
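The quoted loop structure is easier to follow as running code. Below is a minimal numpy sketch: K_B per-task parameter copies are adapted for G_Tr inner gradient steps from the shared parameter Ω, followed by an Adam-style outer update with bias-corrected moments. The toy quadratic task loss, the first-order (Reptile-style) meta-gradient, and all hyperparameter values are assumptions standing in for the paper's truncated gradient of Eq. (18), which the excerpt does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in tasks: source task k has loss ||Omega - c_k||^2, so its
# gradient is 2*(Omega - c_k). This replaces the paper's truncated
# gradient of Eq. (18), which the excerpt does not define.
K_B, G_Tr, dim = 4, 10, 8
targets = rng.normal(size=(K_B, dim))

def task_grad(Omega, k):
    return 2.0 * (Omega - targets[k])

beta, gamma, eps = 0.05, 0.01, 1e-8   # inner lr, meta lr, Adam epsilon
b1, b2 = 0.9, 0.999                   # Adam moment decay rates
Omega = np.zeros(dim)                 # shared meta-parameter
mu3 = np.zeros(dim)                   # 1st-moment vector
nu3 = np.zeros(dim)                   # 2nd-moment vector

for t in range(1, 201):               # outer meta-iterations
    meta_grad = np.zeros(dim)
    for k in range(K_B):              # excerpt: for k = 1, ..., K_B
        Omega_k = Omega.copy()        # excerpt line 9: initialize from Omega
        for _ in range(G_Tr):         # excerpt: for g = 1, ..., G_Tr
            Omega_k -= beta * task_grad(Omega_k, k)   # inner gradient step
        # first-order meta-gradient: pull Omega toward the adapted Omega_k
        meta_grad += (Omega - Omega_k) / K_B
    # Adam-style outer step with bias-corrected ("unbiased") moments
    mu3 = b1 * mu3 + (1 - b1) * meta_grad
    nu3 = b2 * nu3 + (1 - b2) * meta_grad**2
    mu_hat = mu3 / (1 - b1**t)
    nu_hat = nu3 / (1 - b2**t)
    Omega -= gamma * mu_hat / (np.sqrt(nu_hat) + eps)

Omega_Mt = Omega                      # excerpt line 20: meta-trained parameter
```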
“…Initialize the 1st- and 2nd-moment vectors: μ₂, ν₂ ← 0
10: for g = 1, …, G_Ad do
11:   Update the biased 1st- and 2nd-moment vectors using Eq. (15)
12: …”
Mentioning confidence: 99%
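The moment bookkeeping in this excerpt matches the standard Adam recursion; reading its Eq. (15) as that recursion is an assumption. A minimal sketch of the initialization and per-step update:

```python
import numpy as np

def adam_moments(grads, b1=0.9, b2=0.999):
    """Yield bias-corrected (mu_hat, nu_hat) pairs for a gradient stream.

    Sketch only: identifying the excerpt's Eq. (15) with the standard
    Adam moment recursion is our assumption.
    """
    mu2 = np.zeros_like(grads[0])   # 1st-moment vector, initialized to 0
    nu2 = np.zeros_like(grads[0])   # 2nd-moment vector, initialized to 0
    for g, grad in enumerate(grads, start=1):       # g = 1, ..., G_Ad
        mu2 = b1 * mu2 + (1 - b1) * grad            # biased 1st moment
        nu2 = b2 * nu2 + (1 - b2) * grad**2         # biased 2nd moment
        yield mu2 / (1 - b1**g), nu2 / (1 - b2**g)  # bias correction

# usage with a dummy gradient stream
for mu_hat, nu_hat in adam_moments([np.ones(4), 0.5 * np.ones(4)]):
    print(mu_hat[0], nu_hat[0])
```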
“…training time in addition to a huge data set. On the other hand, model-driven DL constructs the network topology based on known domain knowledge and has recently been applied successfully to image reconstruction [29], sparse signal recovery [30]–[33], and wireless communications [1], [11].…”
Section: Introduction · Mentioning confidence: 99%
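To make "constructing the network topology from domain knowledge" concrete, here is an illustrative numpy sketch of deep unfolding for sparse recovery: a fixed number of ISTA iterations for y = A x + n become the layers of a fixed-depth network whose per-layer thresholds would be trained from data (LISTA-style). This generic unfolding is our example, not the exact architecture of any of the cited works.

```python
import numpy as np

def soft(v, tau):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unfolded_ista(y, A, thetas, step):
    """Each loop iteration is one 'layer'; thetas would be learned."""
    x = np.zeros(A.shape[1])
    for theta in thetas:
        x = soft(x + step * A.T @ (y - A @ x), theta)  # gradient + prox
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50)) / np.sqrt(20)      # measurement matrix
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]           # sparse ground truth
y = A @ x_true
step = 0.9 / np.linalg.norm(A, 2) ** 2           # safe ISTA step size
thetas = [0.05] * 10                             # per-layer thresholds
x_hat = unfolded_ista(y, A, thetas, step)
print(np.flatnonzero(np.abs(x_hat) > 0.1))       # recovered support
```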
“…In this paper, we propose a DL-enabled beamforming optimization approach for SINR balancing to provide an improved performance-complexity tradeoff under per-antenna power constraints. Inspired by the model-driven learning philosophy [48], we propose to first learn the dual variables, which have reduced dimension, rather than the original large beamforming matrix, and then recover the beamforming solution from the learned dual solution by exploiting the structure (model) of the beamforming optimization problem. Our main contributions are summarized as follows:…”
Section: Introduction · Mentioning confidence: 99%
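The "learn the dual, then recover the beamformer" structure can be illustrated as follows. In this sketch the learned network is abstracted away as a vector q of K dual variables, and downlink directions are recovered via the classical uplink-downlink relation w_k ∝ (I + Σ_j q_j h_j h_jᴴ)⁻¹ h_k. That recovery formula is a standard SINR-balancing structure assumed here for illustration; only the idea of recovering beamformers from a reduced-dimension dual solution comes from the quote.

```python
import numpy as np

def recover_beamformers(H, q):
    """Recover unit-norm beamforming directions from dual variables q.

    H: (K, N) complex channel matrix, row k = h_k. q: (K,) nonnegative
    duals (here random stand-ins for a network's output). Uses the
    standard structure w_k ~ (I + sum_j q_j h_j h_j^H)^{-1} h_k,
    assumed for illustration.
    """
    K, N = H.shape
    M = np.eye(N) + (H.T * q) @ H.conj()      # I + sum_j q_j h_j h_j^H
    W = np.linalg.solve(M, H.T)               # column k = M^{-1} h_k
    return W / np.linalg.norm(W, axis=0)      # normalize directions

rng = np.random.default_rng(2)
K, N = 3, 4
H = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))
q = np.abs(rng.normal(size=K))                # stand-in for learned duals
W = recover_beamformers(H, q)                 # (N, K) beamforming matrix
print(np.linalg.norm(W, axis=0))              # unit norms
```

The design point of the quoted approach is dimensionality: the network outputs K dual scalars instead of an N×K complex beamforming matrix, and the known problem structure supplies the rest.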